Functional Programming

Unit-4
C Program - Not an FPL
#include <stdio.h>

int main() {
    int num1, num2, sum;

    printf("Enter the first number: ");
    scanf("%d", &num1);

    printf("Enter the second number: ");
    scanf("%d", &num2);

    sum = num1 + num2;

    printf("The sum of %d and %d is %d\n", num1, num2, sum);

    return 0;
}
Haskell - FPL
main :: IO ()
main = do
  putStrLn "Enter the first number: "
  input1 <- getLine
  putStrLn "Enter the second number: "
  input2 <- getLine
  let num1 = read input1 :: Int
  let num2 = read input2 :: Int
  let sum = num1 + num2
  putStrLn $ "The sum of " ++ show num1 ++ " and " ++ show num2 ++ " is " ++ show sum
C Vs Haskell
• Let's have a look at C and Haskell programs that add the contents of two matrices, to understand the differences between the two programs and the approaches of these languages.

• The C code is shown next, followed by the Haskell code.

#include <stdio.h>

int main() {
    int rows, columns;

    // Input the dimensions of the matrices
    printf("Enter the number of rows: ");
    scanf("%d", &rows);
    printf("Enter the number of columns: ");
    scanf("%d", &columns);

    // Declare the matrices
    int matrix1[rows][columns];
    int matrix2[rows][columns];
    int resultMatrix[rows][columns];

    // Input the elements of the first matrix
    printf("Enter the elements of the first matrix:\n");
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < columns; j++) {
            scanf("%d", &matrix1[i][j]);
        }
    }

    // Input the elements of the second matrix
    printf("Enter the elements of the second matrix:\n");
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < columns; j++) {
            scanf("%d", &matrix2[i][j]);
        }
    }

    // Calculate the sum and store it in the result matrix
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < columns; j++) {
            resultMatrix[i][j] = matrix1[i][j] + matrix2[i][j];
        }
    }

    // Display the sum matrix
    printf("The sum of the matrices is:\n");
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < columns; j++) {
            printf("%d\t", resultMatrix[i][j]);
        }
        printf("\n");
    }

    return 0;
}
main :: IO ()
main = do
  putStrLn "Enter the dimensions of the matrices (rows columns):"
  dimensions <- getLine
  let [n, m] = map read (words dimensions)

  putStrLn "Enter the first matrix:"
  matrix1 <- readMatrix n m

  putStrLn "Enter the second matrix:"
  matrix2 <- readMatrix n m

  let resultMatrix = addMatrices matrix1 matrix2

  putStrLn "The sum of the matrices is:"
  showMatrix resultMatrix

-- Function to read a single row of numbers from the user
readRow :: IO [Int]
readRow = do
  row <- getLine
  return (map read (words row))

-- Function to read a matrix with n rows and m columns
readMatrix :: Int -> Int -> IO [[Int]]
readMatrix n m = sequence [readRow | _ <- [1..n]]

-- Function to add two matrices
addMatrices :: [[Int]] -> [[Int]] -> [[Int]]
addMatrices mat1 mat2 = zipWith (zipWith (+)) mat1 mat2

-- Function to display a matrix
showMatrix :: [[Int]] -> IO ()
showMatrix mat = mapM_ (putStrLn . unwords . map show) mat
Comparison points
• Haskell Program:
– Functional Programming: The Haskell program is written in a functional programming paradigm, which emphasizes immutability and declarative programming.
– Type Inference: Haskell uses strong type inference, and type information is inferred by the compiler.
– Dynamic List Sizes: In Haskell, lists can dynamically adjust their sizes as needed, which simplifies handling matrix dimensions.
– Simplicity: Haskell code tends to be concise and expressive due to its high-level abstractions.

• C Program:
– Procedural Programming: The C program follows a procedural programming paradigm, which relies on procedures and functions.
– Explicit Type Declarations: In C, data types need to be explicitly declared, and there is no type inference.
– Fixed Array Sizes: C requires specifying array sizes in advance, making it less flexible when dealing with matrices of variable dimensions.
– Manual Memory Management: C requires manual memory management, including dynamic allocation and deallocation of memory.
– Imperative Style: C uses an imperative style of programming, which involves a sequence of statements that modify the program's state.
– Lower-Level Language: C is closer to the machine and provides fine-grained control over memory and hardware, which can be advantageous for performance-critical applications.
FP - Introduction
• Treats computation as the evaluation of mathematical functions.
• Avoids changing state and mutable data.
• Functions are treated as first-class citizens, meaning they can be assigned to variables, passed as arguments to other functions, and returned as values from functions (see the sketch below).
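
A minimal Haskell sketch of first-class functions (the names double, applyTwice, and makeAdder are our own illustrative choices, not from the slides):

-- A function bound to a name, like any other value
double :: Int -> Int
double x = x * 2

-- A higher-order function: takes a function as an argument
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

-- A function returned as a value
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> x + n

main :: IO ()
main = do
  print (applyTwice double 5)  -- 20
  print (makeAdder 3 10)       -- 13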
Key Concepts - FP
• Immutability: Data, once created, cannot be changed. Instead of modifying existing data structures, new ones are created with the desired changes.

• Pure Functions: Functions in functional programming are pure, meaning they produce the same output for the same input and have no side effects. They do not modify external state.

• Higher-Order Functions: Functions can take other functions as arguments or return them as results. This enables the composition of functions and the creation of more abstract and reusable code.

• Recursion: Loops are replaced with recursive function calls. This is a fundamental concept in functional programming.

• Referential Transparency: The result of a function depends only on its inputs, allowing for easy substitution of function calls with their results.

• Declarative Style: Functional programming promotes a declarative coding style, where the emphasis is on describing what needs to be done rather than how it should be done. This leads to more concise and readable code.

• Lazy Evaluation: In some functional languages, expressions are not evaluated until their results are actually needed. This can improve efficiency by avoiding unnecessary computations. (Several of these concepts are illustrated in the sketch below.)
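
A short Haskell sketch illustrating purity, recursion, and lazy evaluation (sumList and firstTenSquares are our own illustrative names):

-- Pure function: same output for the same input, no side effects
square :: Int -> Int
square n = n * n

-- Recursion in place of a loop
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

-- Lazy evaluation: an infinite list is fine as long as
-- only a finite prefix of it is ever demanded
firstTenSquares :: [Int]
firstTenSquares = take 10 (map square [1..])

main :: IO ()
main = do
  print (sumList [1, 2, 3])  -- 6
  print firstTenSquares      -- [1,4,9,16,25,36,49,64,81,100]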
Implementation of an FPL
The following points may be necessary in the implementation of a functional programming language:

• Design the Language
• Lexical Analysis
• Syntax Analysis (Parsing)
• Semantic Analysis
• Evaluation or Code Generation
• Memory Management
• Standard Library
• Concurrency and Parallelism
• Error Handling
• I/O and Interoperability
• Optimization
• Documentation
• Testing
• Tooling
• Community and Support
• Distribution
Values and Operations in FP
• Values: Values are concrete instances of data that belong to a specific type. For example, in a functional programming language, an integer value might belong to the "Int" type, and a boolean value might belong to the "Bool" type. Values are the actual data that programs operate on, manipulate, and compute with.

• Operations: Operations refer to the functions, methods, and actions that can be performed on values of a specific type. These operations define how values can be combined, transformed, and manipulated. In functional programming, operations are typically performed by applying functions to values, and these operations are strongly associated with the types of the values involved (a small sketch follows).
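
A tiny Haskell illustration of values and the operations applied to them (names are our own):

-- Values belong to types
age :: Int
age = 21

isAdult :: Bool
isAdult = True

-- Operations are functions applied to values of matching types
nextAge :: Int
nextAge = age + 1                    -- (+) operates on Int values

greeting :: String
greeting = "Age: " ++ show nextAge   -- (++) joins Strings; show converts Int to String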
Data Types in FP
• Integers (e.g., Int)
• Floating-Point Numbers (e.g., Float, Double)
• Booleans (e.g., Bool)
• Characters (e.g., Char)
• Strings (e.g., String)
• Lists (e.g., [a] where a is a type variable)
• Tuples (e.g., (a, b) where a and b are type variables)
• Records or Structs (e.g., { field1 :: Type1, field2 :: Type2 })
• Algebraic Data Types (e.g., data declarations in Haskell)
• Option Types (e.g., Maybe a in Haskell)
• Sum Types (e.g., Either a b in Haskell)
• Polymorphic Types (e.g., generics or type variables)
• User-Defined Types (defined using data or type in Haskell)
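
Illustrative Haskell values for several of the types listed above (names are our own):

anInt :: Int
anInt = 42

aDouble :: Double
aDouble = 3.14

aChar :: Char
aChar = 'x'

aList :: [Int]
aList = [1, 2, 3]

aPair :: (String, Bool)
aPair = ("ready", True)

aMaybe :: Maybe Int
aMaybe = Just 7

anEither :: Either String Int
anEither = Right 10

-- A user-defined algebraic data type
data Color = Red | Green | Blue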
Algebraic Data Types – Sum & Product
• In functional programming, sum types and product types are algebraic data types.
• These allow the creation of complex structures by combining simpler types.
• They are also known as algebraic data types because their structure can be described algebraically.
Sum vs Product types
• Sum Type (or Disjoint Union):
– A sum type represents a choice among two or more alternatives. It is formed by combining types using the logical OR operation.
– Each variant of a sum type represents one of the alternatives.
– It is often used to express situations where a value can have one of several distinct forms or states.
– Enums, tagged unions, either types, and result types are common examples of sum types.

• Product Type (or Record):
– A product type combines multiple types into a single type. It is formed by combining types using the logical AND operation.
– Each field of a product type represents one component of the combined type.
– It is used to express situations where a value is a combination of multiple elements with different types.
– Tuples, records, structs, and classes (with attributes) are common examples of product types (see the Haskell sketch below).
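
A minimal Haskell sketch of both kinds of declaration (Shape, Point, and Person are our own illustrative types):

-- Sum type: a Shape is a Circle OR a Rectangle (logical OR)
data Shape = Circle Double
           | Rectangle Double Double

-- Product type: a Point is an x AND a y (logical AND)
data Point = Point Double Double

-- Record syntax: a product type with named fields
data Person = Person { name :: String, age :: Int }

-- Consuming a sum type requires handling each alternative
area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h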
Sum Types
• Enumeration ADT:
– An enumeration ADT represents a finite set of distinct, named values.
• Tagged Union ADT:
– A tagged union ADT combines different types, each tagged with a unique identifier, into a single value.
• Variant ADT:
– A variant ADT allows a type to take on one of several possible forms, representing a choice among alternatives.
• Union Type ADT:
– A union type ADT allows a variable to hold values of more than one type, indicating a choice between those types.
• Option Type ADT:
– An option type ADT represents a choice between a value and an absence of value (commonly represented by Some and None).
• Either Type ADT:
– An either type ADT represents a choice between two types, often used to indicate success or failure.
• Result Type ADT:
– A result type ADT is similar to an either type, representing a choice between a successful result and an error or failure.
• Sum List ADT:
– In functional programming, lists or sequences of sum types can be used to represent a choice among several values. (The option and either types are sketched below.)
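
A sketch of the option and either types in Haskell (safeDiv and parseAge are our own illustrative names):

-- Option type: Maybe models presence or absence of a value
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Either type: Left conventionally carries an error, Right a result
parseAge :: Int -> Either String Int
parseAge n
  | n < 0     = Left "age cannot be negative"
  | otherwise = Right n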
Product types
• Tuple:
– A tuple is a product type that combines a fixed number of elements of different types into a single value.
• Record:
– A record is a product type that combines named fields, each with its own type, into a single value.
• Product Struct:
– Similar to records, product structs in some languages allow the combination of named fields into a single structured value.
• Class Instances:
– In object-oriented languages, instances of classes can be considered product types if they encapsulate multiple attributes.
• Product Enumeration:
– An enumeration with associated values can be considered a product type where each case carries additional data.
• Pair:
– A pair is a simple form of a tuple that contains two elements.
• Date-Time Structure:
– In some languages, date-time structures that combine date and time components into a single value are examples of product types.
List and operations
• A list is a fundamental data structure used to store a sequence of values.
• Lists can be represented as linked lists or arrays, and they are typically homogeneous (all elements have the same type).
• Lists are often used for various purposes, such as
– holding collections of data,
– iterating over elements,
– performing transformations,
– and filtering.
Operations on Lists
• Creating a List:
– Lists can be created by enclosing a sequence of values within square brackets in languages like Haskell, or by using constructors like cons in Lisp.
• Accessing Elements:
– You can access elements of a list by their position (index). In functional programming, the first element is usually at index 0.
• Adding Elements:
– Lists are typically immutable, so adding elements involves creating a new list with the desired elements.
• Concatenation:
– Combining two or more lists to create a new list.
• Mapping:
– Applying a function to each element of the list to create a new list.
• Filtering:
– Creating a new list by selecting elements that satisfy a given predicate.
• Folding (Reduce):
– Reducing a list to a single value using a binary function (e.g., summing all elements).
• Iteration:
– Traversing a list to perform an operation on each element.
• Zip:
– Combining two or more lists element-wise to create a new list.
• Sorting:
– Rearranging the elements of a list in a specific order.
• Splitting and Joining:
– Dividing a list into smaller sublists or joining sublists into a larger list.
• Recursion:
– Solving problems that involve lists using recursive functions. (Several of these operations are shown in the sketch below.)
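
Illustrative Haskell uses of several of these operations (results shown in comments; names are our own):

import Data.List (sort)

xs :: [Int]
xs = [3, 1, 4, 1, 5]

doubled = map (* 2) xs          -- [6,2,8,2,10]           (mapping)
evens   = filter even xs        -- [4]                     (filtering)
total   = foldr (+) 0 xs        -- 14                      (folding / reduce)
joined  = xs ++ [9, 2]          -- [3,1,4,1,5,9,2]         (concatenation)
zipped  = zip xs "abcde"        -- [(3,'a'),(1,'b'),...]   (zip)
sorted  = sort xs               -- [1,1,3,4,5]             (sorting)

(front, back) = splitAt 2 xs    -- ([3,1], [4,1,5])        (splitting)

main :: IO ()
main = mapM_ print [doubled, evens, joined, sorted]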
Lexical Scope
• Lexical Scope (Static Scope)
– Lexical scope is determined by the placement of variables in the source code.
– The scope of a variable is determined by its location in the code, specifically where it is declared.
– In languages like Python, Java, or C, the scope of a variable is determined by the block of code in which it is declared.
– Inner blocks have access to variables declared in outer blocks.
– Easier to understand and reason about, as the scope is evident from the code structure.
– Can lead to issues if there are many nested functions or closures.

• Python code showing lexical scoping:

def outer_function():
    x = 10

    def inner_function():
        print(x)  # inner_function can access x from outer_function's scope

    inner_function()

outer_function()
Dynamic Scope
• Dynamic Scope
– Dynamic scope is determined by the call stack during runtime.
– The scope of a variable is based on the sequence of function calls that lead to the current point in the program.
– In languages that use dynamic scope, a function can access variables from the calling function's scope, not just its own scope.
– Can be more flexible in certain situations, as the scope is based on the execution context.
– Can be harder to predict and may lead to unexpected behavior, especially in larger programs.

• LISP code for dynamic scoping (note: in Common Lisp, x must be declared special, e.g. with defvar, for the let binding to be dynamic):

(defvar x)  ; declare x "special" so that let binds it dynamically

(defun outer-function ()
  (let ((x 10))
    (inner-function)))

(defun inner-function ()
  (print x))  ; inner-function can access x from the calling function's scope

(outer-function)
Binding Values and Functions
• Binding refers to the association of an identifier with a value or function.
• It means that the name is a reference, or a way to access that specific value or function.
• Binding occurs when a variable is declared and assigned a value, or when a function is defined.
• There are two main types of binding in functional programming:
– Value Binding
– Function Binding
• Binding is closely related to immutability.
• Once a name is bound to a value or function, it cannot be changed.
• This contributes to the referential transparency and predictability of functional programs.
• Usually, lexical scoping determines how binding is resolved.
• Binding enables the use of names to refer to values and functions, and helps in the development of clear, expressive, and predictable code.
Value Binding
• In value binding, a name is associated with a specific data value. For example, in Haskell:

x :: Int
x = 42

• Here, the name "x" is bound to the integer value 42. Any reference to "x" in the program will be replaced with the value 42.
Function Binding
• In function binding, a name is associated with a function or a set of computations.
• For example, in Haskell:

square :: Int -> Int
square n = n * n

• Here, the name "square" is bound to a function that squares its input. Any reference to "square" in the program will be replaced with the function definition.
Threads
• In programming, a thread is the smallest unit
of execution within a process.
• A process can have multiple threads, each
executing independently but sharing the same
resources (such as memory space).
• Threads are a way to achieve concurrent
execution, enabling multiple tasks to be
performed simultaneously.
Key concepts: Threads
• Thread Creation:
– Threads can be created within a process using threading libraries or features provided by the programming language.

• Concurrency:
– Threads allow multiple tasks to run concurrently, which is particularly useful for applications with parallelizable tasks.

• Shared Resources:
– Threads within the same process share the same memory space and resources. This makes communication between threads more straightforward but also requires careful synchronization to avoid conflicts.

• Thread Lifecycle:
– Threads typically go through various states, including creation, running, waiting, and termination.
class MyThread extends Thread {
    public void run() {
        for (int i = 0; i < 5; i++) {
            System.out.println(Thread.currentThread().getId() + " Value " + i);
        }
    }
}

public class ExampleThreadUsage {
    public static void main(String[] args) {
        MyThread t1 = new MyThread();
        MyThread t2 = new MyThread();

        t1.start(); // Start the first thread
        t2.start(); // Start the second thread

        // The main thread continues to execute concurrently with t1 and t2

        // You can also use anonymous classes to define threads
        Thread t3 = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 5; i++) {
                    System.out.println(Thread.currentThread().getId() + " Value " + i);
                }
            }
        });

        t3.start(); // Start the third thread
    }
}
• The MyThread class extends the Thread class and overrides the run method. This method defines the code that will be executed when the thread is started.

• In the main method, two instances of MyThread are created (t1 and t2), and their start methods are called. The start method internally calls the run method in a new thread of execution.

• The main thread continues to execute concurrently with t1 and t2.

• An anonymous class implementing the Runnable interface is used to define another thread (t3). This is an alternative way to create threads in Java.
Processes
• A process is an independent program or a unit of work that runs in its own memory space and executes a sequence of instructions.
• A process is the execution of a program, along with its current state, which includes variables, registers, program counter, and other data.
• Each process operates independently of other processes, and the operating system (OS) manages their execution.
Stages of Process
• The life cycle of a process involves several
stages.
– Creation
– Ready
– Running
– Blocked (Wait or Sleep)
– Termination
Can you draw the process state diagram?
Process - Types
• Batch Processes:
– Batch processes are designed to process a set of data or tasks without user interaction.
– They are typically non-interactive and execute in the background.
– Examples include payroll processing and report generation.
• Interactive Processes:
– Interactive processes involve user interaction.
– They respond to user inputs in real-time and often have a user interface.
– Examples include command-line interfaces, graphical applications, and games.
• Foreground Processes:
– Foreground processes run in the foreground, and the user interacts with them directly.
– They typically have control of the terminal or user interface.
– Examples include command-line programs that execute in the foreground.
• Background Processes:
– Background processes run independently of the user interface and do not require direct user interaction.
– They execute in the background and are often used for tasks that do not need immediate attention.
– Examples include system daemons and scheduled tasks.
• Real-time Processes:
– Real-time processes have strict timing requirements, and their responses are time-sensitive.
– They are designed to meet specific deadlines for execution.
– Examples include control systems in manufacturing and embedded systems.
• Multithreaded Processes:
– Multithreaded processes consist of multiple threads of execution within a single process.
– Threads within a process share the same resources but can run independently.
– Examples include applications that benefit from parallelism, such as web servers.
• Distributed Processes:
– Distributed processes involve the execution of a program across multiple connected systems.
– Processes communicate and coordinate to achieve a common goal.
– Examples include distributed computing applications and client-server systems.
• Foreground vs. Background Processes:
– Processes can be classified based on whether they run in the foreground or background.
– Foreground processes require user interaction, while background processes run independently.
• System Processes:
– System processes are essential processes that are part of the operating system.
– They manage system resources, handle interrupts, and perform critical tasks.
– Examples include the kernel and system daemons.
• User Processes:
– User processes are initiated and controlled by users.
– They include applications and services initiated by users.
– Examples include word processors, web browsers, and user-initiated scripts.
Process Synchronization
• Process synchronization is crucial in concurrent and parallel computing environments to ensure orderly and predictable execution of multiple processes or threads that share resources.

• Reasons for synchronization include:
– Race Conditions:
• Without synchronization, multiple processes or threads can access shared resources simultaneously, leading to race conditions. Race conditions occur when the final outcome of a program depends on the order of execution of threads, and this order is unpredictable.
– Data Inconsistency:
• When multiple processes or threads access shared data concurrently, it can result in data inconsistency. For example, if one thread is updating a variable while another is reading it, the reading thread may get an intermediate or inconsistent value.
• Deadlocks:
– Deadlocks can occur when multiple processes or threads are waiting for each other to release resources, creating a circular dependency. Synchronization mechanisms help prevent and resolve deadlocks.
• Resource Contention:
– Shared resources, such as files, database connections, or hardware devices, can become points of contention. Synchronization ensures that processes or threads access these resources in a controlled and mutually exclusive manner.
• Orderly Communication:
– Synchronization is essential for processes or threads to communicate with each other in an orderly way. It allows them to exchange information and coordinate their activities without conflicting with each other.
• Critical Sections:
– Critical sections are portions of code where shared resources are accessed and modified. Synchronization mechanisms, such as locks or semaphores, help enforce mutual exclusion in critical sections to avoid conflicts.
• Consistent State:
– In applications where maintaining a consistent state is crucial, synchronization helps ensure that multiple processes or threads collectively achieve a consistent and correct result.
• Preventing Resource Exhaustion:
– Synchronization helps prevent scenarios where multiple processes or threads try to acquire the same resource simultaneously, leading to resource exhaustion or contention.
• Correctness and Predictability:
– Synchronization ensures the correctness and predictability of program behavior. It helps avoid unexpected outcomes that could arise from uncontrolled interactions between concurrent processes or threads.
• Efficient Resource Utilization:
– Synchronization allows for efficient and controlled use of shared resources, preventing wasteful contention and ensuring that resources are utilized optimally.
Synchronization Monitors
• Synchronization monitors, or monitors, are a high-level synchronization mechanism used in concurrent programming to control access to shared resources and coordinate the execution of multiple threads or processes.
• A monitor combines the notion of a data structure with a set of procedures, or functions, that operate on that data structure.
• The data structure can be accessed and modified by multiple threads.
• A monitor encapsulates shared resources and their associated synchronization mechanisms into a single entity.

– We encapsulate the shared resources, and the procedures using these resources, with the help of monitors.
– This ensures that the shared resources are used safely when accessed concurrently.
Key concepts - Monitors
• Mutual Exclusion:
– Monitors ensure that only one thread can execute a procedure within the monitor at any given time.
– This guarantees mutual exclusion and prevents race conditions where multiple threads attempt to modify shared data concurrently.

• Condition Variables:
– Monitors often include condition variables that allow threads to wait for a specific condition to be satisfied before proceeding.
– Threads can signal or broadcast to wake up waiting threads when certain conditions are met (see the sketch below).

• Encapsulation:
– Monitors encapsulate both data and the procedures that operate on that data, providing a clear and modular structure.
– This encapsulation simplifies the implementation of critical sections and reduces the likelihood of errors.

• Atomicity:
– Procedures within a monitor are typically executed atomically, meaning they appear to execute instantaneously without interruption.
– This atomicity simplifies reasoning about the behavior of the monitor.
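
A minimal sketch of a monitor with a condition variable in Java, using wait/notifyAll inside synchronized methods (a one-slot buffer; OneSlotBuffer and its methods are our own illustrative names):

// One-slot buffer: mutual exclusion plus a wait/notify condition
class OneSlotBuffer {
    private Integer slot = null;  // shared state, guarded by this monitor

    public synchronized void put(int value) throws InterruptedException {
        while (slot != null) {
            wait();               // wait until the slot is empty
        }
        slot = value;
        notifyAll();              // wake consumers waiting for a value
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait();               // wait until a value is available
        }
        int value = slot;
        slot = null;
        notifyAll();              // wake producers waiting for space
        return value;
    }
}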
public class MonitorExample {
    public static void main(String[] args) {
        // Shared resource
        SharedResource sharedResource = new SharedResource();

        // Creating two threads that share the same resource
        CounterThread thread1 = new CounterThread(sharedResource, "Thread-1");
        CounterThread thread2 = new CounterThread(sharedResource, "Thread-2");

        // Start the threads
        thread1.start();
        thread2.start();

        try {
            // Wait for both threads to finish
            thread1.join();
            thread2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        System.out.println("Main thread exiting.");
    }
}
Monitors in Java - Example
• In Java, monitors can be implemented using the synchronized keyword to create synchronized blocks of code.

• The main method is the entry point of the program, and it runs in the main thread.

• The main thread creates an instance of SharedResource and two instances of CounterThread (named Thread-1 and Thread-2).

• The main thread starts both Thread-1 and Thread-2 using the start method.

• The main thread then waits for both Thread-1 and Thread-2 to finish using the join method.

• After both threads have completed, the main thread proceeds to print the message "Main thread exiting."

• The SharedResource class has a counter variable, and the increment method is declared as synchronized. This ensures that only one thread can execute the increment operation at a time, preventing race conditions.

• The CounterThread class represents a thread that increments the shared counter in the run method.

• By using the synchronized keyword in the increment method, we create a monitor on the SharedResource, ensuring that the increment operation is atomic and thread-safe.
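
The slides do not show SharedResource and CounterThread themselves; a minimal sketch consistent with the description above could be (assumed, illustrative):

class SharedResource {
    private int counter = 0;

    // synchronized makes this object act as a monitor:
    // only one thread can execute increment() at a time
    public synchronized void increment() {
        counter++;
    }

    public synchronized int getCounter() {
        return counter;
    }
}

class CounterThread extends Thread {
    private final SharedResource resource;

    CounterThread(SharedResource resource, String name) {
        super(name);
        this.resource = resource;
    }

    public void run() {
        for (int i = 0; i < 1000; i++) {
            resource.increment();  // thread-safe thanks to the monitor
        }
    }
}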
Concurrent programming
• Concurrent programming involves designing and implementing programs that can execute multiple tasks concurrently, allowing different parts of the program to run in parallel.

• For example, Java provides built-in support for concurrent programming through features like threads, synchronization, and high-level concurrency utilities.
Key components
• Threads:
– Java supports multithreading through the Thread class and the Runnable interface.
– We can create threads by extending the Thread class or implementing the Runnable interface and passing the instance to a Thread constructor.

class MyThread extends Thread {
    public void run() {
        // Code to be executed by the thread
    }
}

• Thread Lifecycle:
– Threads in Java go through different states, including NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED.
– The start() method is used to begin the execution of a thread, and the run() method contains the code to be executed by the thread.
• Synchronization:
– Java provides synchronization mechanisms to control access to shared resources and prevent race conditions.
– The synchronized keyword can be used to create synchronized methods or blocks, ensuring only one thread can access the critical section at a time.

synchronized void synchronizedMethod() {
    // Code in the critical section
}
• High-Level Concurrency Utilities:
– Java offers high-level concurrency utilities in the java.util.concurrent package, including ExecutorService, ThreadPoolExecutor, and Future.
– These utilities simplify the management of thread pools, asynchronous tasks, and parallel execution.
• Atomic Operations:
– Java provides atomic classes in the java.util.concurrent.atomic package, such as AtomicInteger and AtomicBoolean, which allow for atomic operations without using explicit synchronization (see the sketch below).
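
A brief illustrative sketch of an atomic counter (the AtomicCounter class is our own name):

import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    // Atomic read-modify-write: no synchronized block needed
    public void increment() {
        count.incrementAndGet();
    }

    public int get() {
        return count.get();
    }
}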
• Thread Pools:
– Thread pools manage a group of worker threads, improving the efficiency of thread creation and management.
– The ExecutorService interface provides a framework for creating and managing thread pools.

ExecutorService executorService = Executors.newFixedThreadPool(5);
executorService.submit(new MyRunnable());
• Locks and Conditions:
– The ReentrantLock class provides an alternative to synchronized methods for managing locks, and it supports more advanced features.
– The Condition interface allows threads to coordinate and communicate within a lock-protected region.
• Thread Safety:
– Writing thread-safe code is essential for concurrent programs. It involves ensuring that shared data is accessed safely by multiple threads.
– Volatile variables, locks, and other synchronization mechanisms help achieve thread safety.
• Parallel Streams:
– Java 8 introduced the Stream API, which includes parallel streams for parallel processing of collections.
– Parallel streams simplify parallelization of operations on large data sets (see the sketch below).
• Fork/Join Framework:
– The ForkJoinPool and RecursiveTask/RecursiveAction classes provide a framework for parallelizing recursive algorithms.
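
An illustrative parallel-stream sketch (sums the squares of a small list; class and variable names are ours):

import java.util.Arrays;
import java.util.List;

public class ParallelStreamExample {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // parallelStream() splits the work across the common fork/join pool
        int sumOfSquares = numbers.parallelStream()
                                  .mapToInt(n -> n * n)
                                  .sum();

        System.out.println(sumOfSquares); // 55
    }
}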
• Inheritance of Mutual Exclusion:
– Mutual exclusion is inherited by all procedures within the monitor. If one procedure is executing, others are blocked from entering.
• Resource Management:
– Monitors can be used to manage access to shared resources such as data structures, files, or devices.
– They help prevent conflicts and ensure that resources are used in a coordinated manner.
Concurrent Objects
• Concurrent objects, also known as concurrent data structures, are specialized data structures designed for use in concurrent or parallel programming.

• These objects provide a way to manage shared data in a multi-threaded environment, where multiple threads can access and modify the data simultaneously.

• The goal is to ensure thread safety and maintain data consistency while allowing for efficient parallel execution.
• Concurrent Queues:
– Concurrent queues are data structures that allow multiple threads to insert and remove elements concurrently. They provide thread-safe operations for enqueueing and dequeueing elements. Examples include ConcurrentLinkedQueue and LinkedBlockingQueue in Java.
• Concurrent Maps:
– Concurrent maps are thread-safe implementations of the Map interface. They allow multiple threads to access and modify the map concurrently without the need for external synchronization. Examples include ConcurrentHashMap in Java (see the sketch below).
• Concurrent Sets:
– Concurrent sets are thread-safe implementations of the Set interface. They provide concurrency support for adding, removing, and querying elements from the set. Examples include ConcurrentSkipListSet in Java.
• Transactional Memory:
– Transactional memory is a concurrency control mechanism that allows multiple threads to execute transactions concurrently with the guarantee of isolation. It provides atomicity, consistency, isolation, and durability (the ACID properties), similar to database transactions.
• Read-Write Locks:
– Read-write locks are synchronization mechanisms that allow multiple threads to read a shared resource concurrently but provide exclusive access for writing. This can improve performance in scenarios where read operations are more frequent than write operations. Java provides ReentrantReadWriteLock for this purpose.
• Double-Checked Locking:
– Double-checked locking is an optimization pattern used in multithreaded environments to reduce the overhead of acquiring a lock. It involves checking a lock condition before acquiring a lock to avoid unnecessary locking. However, it needs to be implemented carefully to avoid issues related to thread safety and memory visibility.
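
A small illustrative sketch of a concurrent map (the hit-counting scenario is our own example):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapExample {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> hits = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                // merge() updates the count atomically; no external locking
                hits.merge("page", 1, Integer::sum);
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(hits.get("page")); // 2000
    }
}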
Java & Concurrency
• Thread-Based Concurrency:
– Java uses threads as the basic unit of concurrent execution.
• Runnable Interface:
– The Runnable interface allows the definition of tasks that can be executed by threads.
• Executor Framework:
– The Executor framework simplifies thread management, providing facilities for thread creation, pooling, and lifecycle management.
• Synchronization:
– Java provides mechanisms like synchronized blocks and methods to ensure thread safety in the presence of shared resources.
• Concurrent Collections:
– The java.util.concurrent package includes thread-safe implementations of common data structures, such as ConcurrentHashMap and CopyOnWriteArrayList.
• Atomic Operations:
– The java.util.concurrent.atomic package offers atomic variables for performing operations without explicit synchronization.
• High-Level Concurrency Abstractions:
– Java provides high-level abstractions like Locks, Semaphores, and CountDownLatch for managing complex synchronization scenarios (a CountDownLatch sketch follows).
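
A short illustrative CountDownLatch sketch (the worker count and names are our own):

import java.util.concurrent.CountDownLatch;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("Worker " + id + " finished");
                done.countDown();  // signal completion
            }).start();
        }

        done.await();  // block until all workers have counted down
        System.out.println("All workers done");
    }
}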
