The Singleton pattern is a common design pattern used in software development to ensure a class has only one instance and to provide a global point of access to it. While there are several ways to implement a Singleton in Java, one of the most efficient and recommended methods is the Initialization-on-Demand Holder Idiom, also known as the Bill Pugh Singleton. This method leverages the Java language's guarantees about class initialization, ensuring thread safety and lazy loading without requiring explicit synchronization.
In this blog, we’ll delve into the Bill Pugh Singleton pattern, understand why it’s effective, and implement it in Kotlin.
Bill Pugh is a computer scientist and professor emeritus at the University of Maryland, College Park. He is well-known for his contributions to the field of computer science, particularly in the areas of programming languages, software engineering, and the Java programming language.
One of his most notable contributions is the development of the Skip List, a data structure that allows for efficient search, insertion, and deletion operations. However, in the Java community, he is perhaps best known for his work on improving the thread safety and performance of Singleton pattern implementations, which led to the popularization of the Initialization-on-Demand Holder Idiom, commonly referred to as the Bill Pugh Singleton pattern.
Revisiting the Singleton
The Singleton pattern restricts the instantiation of a class to one “single” instance. This pattern is useful when exactly one object is needed to coordinate actions across the system.
Basic Singleton Implementation in Kotlin
Kotlin
object BasicSingleton {
    fun showMessage() {
        println("Hello, I am a Singleton!")
    }
}
Here, Kotlin provides a concise way to define a Singleton using the object keyword. However, the object declaration is eagerly initialized. If your Singleton has costly initialization and might not always be needed, this could lead to inefficient resource usage.
The Problem with Early Initialization
In some cases, you might want the Singleton instance to be created only when it is needed (lazy initialization). The traditional approach is to guard instance creation with synchronization, which ensures thread safety but performs synchronization on every access and can become a performance bottleneck. This is where the Bill Pugh Singleton comes into play.
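For comparison, here is a minimal sketch of that traditional approach, assuming a classic synchronized getInstance(); the class name SynchronizedSingleton is illustrative and not part of the original article.

Kotlin
class SynchronizedSingleton private constructor() {

    companion object {
        private var instance: SynchronizedSingleton? = null

        // Thread-safe and lazy, but every call acquires the lock,
        // even long after the instance has been created.
        @Synchronized
        fun getInstance(): SynchronizedSingleton {
            if (instance == null) {
                instance = SynchronizedSingleton()
            }
            return instance!!
        }
    }

    fun showMessage() {
        println("Hello, I am a synchronized lazy Singleton!")
    }
}

fun main() {
    SynchronizedSingleton.getInstance().showMessage()
}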
The Initialization-on-Demand Holder Idiom
The Bill Pugh Singleton pattern, or the Initialization-on-Demand Holder Idiom, ensures that the Singleton instance is created only when it is requested for the first time, leveraging the classloader mechanism to ensure thread safety.
Key Characteristics:
Lazy Initialization: The Singleton instance is not created until the getInstance() method is called.
Thread Safety: The class initialization phase is thread-safe, ensuring that only one thread can execute the initialization logic.
Efficient Performance: No synchronized blocks are used, which avoids the potential performance hit.
Bill Pugh Singleton Implementation in Kotlin
Let’s implement the Bill Pugh Singleton pattern in Kotlin.
Step-by-Step Implementation
Define the Singleton Class: We first define the Singleton class but do not instantiate it directly. Instead, we define an inner static class that holds the Singleton instance.
Inner Static Class: The static inner class is not loaded into memory until the getInstance() method is called, ensuring lazy initialization.
Accessing the Singleton Instance: The Singleton instance is accessed through a method that returns the instance held by the inner static class.
Kotlin
class BillPughSingleton private constructor() {

    companion object {
        // Static inner class - inner classes are not loaded until they are referenced.
        private class SingletonHolder {
            companion object {
                val INSTANCE = BillPughSingleton()
            }
        }

        // Method to get the singleton instance
        fun getInstance(): BillPughSingleton {
            return SingletonHolder.INSTANCE
        }
    }

    // Any methods or properties for your Singleton can be defined here.
    fun showMessage() {
        println("Hello, I am a Bill Pugh Singleton in Kotlin!")
    }
}

fun main() {
    // Get the Singleton instance
    val singletonInstance = BillPughSingleton.getInstance()

    // Call a method on the Singleton instance
    singletonInstance.showMessage()
}
Output:
Kotlin
Hello, I am a Bill Pugh Singleton in Kotlin!
Here is the explanation of the implementation:
Private Constructor: The private constructor() prevents direct instantiation of the Singleton class.
Companion Object: In Kotlin, the companion object is used to hold the Singleton instance. The actual instance is inside the SingletonHolder companion object, ensuring it is not created until needed.
Lazy Initialization: The SingletonHolder.INSTANCE is only initialized when getInstance() is called for the first time, ensuring the Singleton is created lazily.
Thread Safety: The JVM's class-loading mechanism handles the initialization of the SingletonHolder class, ensuring that only one instance of the Singleton is created even if multiple threads call getInstance() simultaneously. In short, the JVM guarantees that a class, including a static holder class, is initialized only once, which provides thread safety without explicit synchronization.
Skeptical about the thread-safety claim? Let's look at a practical demonstration of the Bill Pugh Singleton's thread safety.
Practical Demonstration of Thread Safety
Let’s demonstrate this with a Kotlin example that spawns multiple threads to try to access the Singleton instance concurrently. We will also add logging to see when the instance is created.
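The original demo code is not shown here, so below is a minimal sketch consistent with the description that follows: a variant of BillPughSingleton whose holder logs creation through also, plus a main function that starts ten threads.

Kotlin
class BillPughSingleton private constructor() {

    companion object {
        private class SingletonHolder {
            companion object {
                // The also block logs exactly when the instance is created.
                val INSTANCE = BillPughSingleton().also {
                    println("Singleton instance created.")
                }
            }
        }

        fun getInstance(): BillPughSingleton = SingletonHolder.INSTANCE
    }

    fun showMessage(threadNumber: Int) {
        println("Hello from Singleton instance! Accessed by thread $threadNumber.")
    }
}

fun main() {
    // Spawn 10 threads that all race to obtain the Singleton.
    val threads = (0 until 10).map { threadNumber ->
        Thread {
            BillPughSingleton.getInstance().showMessage(threadNumber)
        }
    }
    threads.forEach { it.start() }

    // Wait for all threads to finish before the main thread exits.
    threads.forEach { it.join() }
}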
Singleton Creation Logging: The also block in val INSTANCE = BillPughSingleton().also { ... } prints a message when the Singleton instance is created. This allows us to observe exactly when the Singleton is initialized.
Multiple Threads: We create and start 10 threads that each tries to get the Singleton instance and call showMessage(threadNumber) on it.
Thread Join: join() ensures that the main thread waits for all threads to finish execution before proceeding.
Expected Output
If the Bill Pugh Singleton pattern is indeed thread-safe, we should see the “Singleton instance created.” message exactly once, no matter how many threads attempt to access the Singleton simultaneously.
Kotlin
Singleton instance created.
Hello from Singleton instance! Accessed by thread 0.
Hello from Singleton instance! Accessed by thread 1.
Hello from Singleton instance! Accessed by thread 2.
Hello from Singleton instance! Accessed by thread 3.
Hello from Singleton instance! Accessed by thread 4.
Hello from Singleton instance! Accessed by thread 5.
Hello from Singleton instance! Accessed by thread 6.
Hello from Singleton instance! Accessed by thread 7.
Hello from Singleton instance! Accessed by thread 8.
Hello from Singleton instance! Accessed by thread 9.
Note: In practice, the thread messages will appear in a nondeterministic order; they are shown in sequence here only for readability.
Hence, the output demonstrates that despite multiple threads trying to access the Singleton simultaneously, the instance is created only once. This confirms that the Bill Pugh Singleton pattern is indeed thread-safe. The JVM handles the synchronization for us, ensuring that even in a multithreaded environment, the Singleton instance is created safely and efficiently.
Advantages of Using Bill Pugh Singleton
Thread-Safe: The pattern is inherently thread-safe, avoiding the need for synchronization.
Lazy Initialization: Ensures that the Singleton instance is created only when needed.
Simple Implementation: It avoids the boilerplate code associated with other Singleton implementations.
Readability: The code is concise and easy to understand.
Conclusion
The Bill Pugh Singleton, or Initialization-on-Demand Holder Idiom, is an elegant and efficient way to implement the Singleton pattern, especially when you need lazy initialization combined with thread safety. Kotlin’s powerful language features allow for a concise and effective implementation of this pattern.
This pattern is ideal when working on large applications where resources should be allocated efficiently, and thread safety is a concern. By understanding and utilizing this pattern, you can enhance the performance and reliability of your Kotlin applications.
In the world of software development, design patterns emerge as essential tools, offering time-tested solutions to common challenges. These patterns are not just arbitrary guidelines but are structured, proven approaches derived from the collective experience of seasoned developers. By understanding and applying design patterns, developers can craft efficient, maintainable, and scalable software systems.
Introduction to Design Patterns
Design patterns can be thought of as reusable solutions to recurring problems in software design. Imagine you’re building a house; instead of starting from scratch each time, you use blueprints that have been refined over years of experience. Similarly, design patterns serve as blueprints for solving specific problems in software development. They provide a structured approach that helps in tackling common issues such as object creation, object interaction, and code organization.
From Architecture to Software
The concept of design patterns originated outside the realm of software, rooted in architecture. In the late 1970s, architect Christopher Alexander introduced the idea of design patterns in his book “A Pattern Language.” Alexander and his colleagues identified recurring problems in architectural design and proposed solutions that could be applied across various contexts. These solutions were documented as patterns, forming a language that architects could use to create more functional and aesthetically pleasing spaces.
This idea of capturing and reusing solutions resonated with the software community, which faced similar challenges in designing complex systems. By the 1980s and early 1990s, software developers began to recognize the potential of applying design patterns to code, adapting Alexander’s concepts to address common problems in software architecture.
The Gang of Four
The formalization of design patterns in software development took a significant leap forward with the publication of “Design Patterns: Elements of Reusable Object-Oriented Software” in 1994. This book, authored by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, collectively known as the “Gang of Four” (GoF), became a seminal work in the field. It catalogs 23 classic design patterns, grouped into three categories:
Creational Patterns: Focused on object creation mechanisms, ensuring that a system can be efficiently extended without knowing the exact classes that will be instantiated. Examples include the Singleton, Factory, and Builder patterns.
Structural Patterns: Concerned with the composition of classes or objects, simplifying complex relationships and providing flexible solutions for building larger structures. Examples include the Adapter, Composite, and Decorator patterns.
Behavioral Patterns: Focused on communication between objects, defining how objects interact and distribute responsibilities. Examples include the Observer, Strategy, and Command patterns.
Categories of Design Patterns
The three main categories of design patterns are:
Creational Patterns: Deal with object creation mechanisms.
Structural Patterns: Focus on the composition of classes or objects.
Behavioral Patterns: Concern the interaction and responsibility of objects.
Creational Patterns
These patterns deal with object creation mechanisms, trying to create objects in a manner suitable for the situation.
Singleton: Ensures a class has only one instance and provides a global point of access to it.
Factory Method: Defines an interface for creating an object, but lets subclasses alter the type of objects that will be created.
Abstract Factory: Provides an interface for creating families of related or dependent objects without specifying their concrete classes.
Builder: Separates the construction of a complex object from its representation.
Prototype: Creates new objects by copying an existing object, known as the prototype.
Structural Patterns
These patterns focus on composing classes and objects into larger, more flexible structures.
Adapter: Allows incompatible interfaces to work together by wrapping an existing class with a new interface.
Bridge: Separates an object’s abstraction from its implementation so that the two can vary independently.
Composite: Composes objects into tree structures to represent part-whole hierarchies.
Decorator: Adds responsibilities to objects dynamically.
Facade: Provides a simplified interface to a complex subsystem.
Flyweight: Reduces the cost of creating and manipulating a large number of similar objects.
Proxy: Provides a surrogate or placeholder for another object to control access to it.
Behavioral Patterns
These patterns are concerned with algorithms and the assignment of responsibilities between objects.
Chain of Responsibility: Passes a request along a chain of handlers, where each handler can process the request or pass it on.
Command: Encapsulates a request as an object, thereby allowing for parameterization and queuing of requests.
Interpreter: Defines a representation of a grammar for a language and an interpreter to interpret sentences in the language.
Iterator: Provides a way to access elements of a collection sequentially without exposing its underlying representation.
Mediator: Reduces chaotic dependencies between objects by having them communicate through a mediator object.
Memento: Captures and externalizes an object’s internal state without violating encapsulation, so it can be restored later.
Observer: Defines a one-to-many dependency between objects so that when one object changes state, all its dependents are notified.
State: Allows an object to alter its behavior when its internal state changes.
Strategy: Defines a family of algorithms, encapsulates each one, and makes them interchangeable.
Template Method: Defines the skeleton of an algorithm in a method, deferring some steps to subclasses.
Visitor: Represents an operation to be performed on elements of an object structure, allowing new operations to be defined without changing the classes of the elements on which it operates.
Why Do We Use Design Patterns?
Design patterns aren’t just buzzwords—they’re powerful tools that make software development smoother and more efficient. Here’s why they’re so valuable:
Reusability: Design patterns provide tried-and-true solutions to common problems. Instead of reinventing the wheel, developers can reuse these patterns, saving time and effort while promoting modularity in software systems.
Improved Communication: Design patterns create a shared language among developers. When everyone understands the same patterns, it’s easier to discuss and make design decisions as a team.
Best Practices: Design patterns encapsulate the wisdom of experienced developers. For those new to the field, they offer a way to learn from the best, ensuring that your code follows industry standards.
Maintainability: Using design patterns often leads to cleaner, more organized code. This makes it easier to update, debug, and extend the codebase as the project evolves.
Easier Problem-Solving: Design patterns provide a structured approach to tackling complex problems. They help break down big issues into manageable parts, making the development process more efficient.
Design patterns are essential tools that enhance code quality, collaboration, and problem-solving, making them a key asset in any developer’s toolkit.
How Do We Choose the Right Design Pattern?
Design patterns are like cool tools in your developer toolbox, but it’s important to use them wisely. Here’s what you need to keep in mind:
Think About the Situation: Design patterns shine in the right context. But using them just for the sake of it might not always be the best move. Make sure the pattern fits the problem you’re solving.
Keep It Simple: Sometimes, the simplest solution is the best one. Don’t overcomplicate things by forcing a pattern where a straightforward approach would do the job.
Watch Out for Speed Bumps: Design patterns can sometimes slow down your program. Weigh the pros and cons to see if the benefits outweigh the potential performance hit.
Be Ready to Change: As your project grows, what worked before might not be the best choice anymore. Stay flexible and be prepared to adapt as needed.
Using design patterns is like having a set of handy tools at your disposal. Just remember that not every tool is right for every job. Choose the ones that best fit the situation, and your code will be stronger and more reliable!
Conclusion
The journey of design patterns from architecture to software highlights the power of abstraction and the value of shared knowledge. From their origins in the work of Christopher Alexander to the seminal contributions of the Gang of Four, design patterns have become a cornerstone of software design, enabling developers to build robust, maintainable systems with greater efficiency.
As software development continues to evolve, so too will design patterns, adapting to new challenges and opportunities. By understanding the history and evolution of design patterns, developers can better appreciate their importance and apply them effectively in their work, ensuring that their solutions stand the test of time.
As the adoption of cloud technologies continues to rise, organizations are increasingly reliant on cloud-based applications to drive business operations and deliver services. However, with this reliance comes the imperative need to secure these applications against a myriad of cyber threats. Two critical initiatives have emerged to address these challenges: Cloud Application Security Assessment (CASA) and the App Defense Alliance (ADA). In this article, we will delve into the objectives, methodologies, and impacts of CASA and ADA on the cloud security landscape.
Before understanding CASA, let's first understand what ADA is.
What is ADA (App Defense Alliance)
Launched by Google in 2019, the App Defense Alliance was established to ensure the safety of the Google Play Store and the Android app ecosystem by focusing on malware detection and prevention. With a growing emphasis on app security standards, the Alliance expanded its scope in 2022 and is now the home for several industry-led collaborations, including Malware Mitigation and App Security Assessments for both mobile and cloud applications.
The App Defense Alliance was formed with the mission of reducing the risk of app-based malware and better protecting Android users. Malware defense remains an important focus for Google and Android, and ADA will continue to partner closely with the Malware Mitigation Program members (ESET, Lookout, McAfee, Trend Micro, Zimperium) on direct signal sharing. The migration of ADA under the Linux Foundation will enable broader threat intelligence sharing across leading ecosystem partners and researchers.
How ADA Works
The ADA operates through a combination of automated and manual processes:
Automated Scanning: Partner companies use advanced machine learning models and behavioral analysis to scan apps for malicious behaviors, vulnerabilities, and compliance issues.
Human Expertise: Security researchers and analysts review flagged apps, conduct deeper inspections, and provide insights into emerging threats.
Developer Collaboration: ADA partners work closely with app developers to remediate issues, providing guidance on secure coding practices and threat mitigation.
Google Play Protect Integration: ADA findings are integrated into Google Play Protect, Google’s built-in malware protection for Android devices, further enhancing app security for users.
Now, let’s understand CASA and its benefits
What is CASA
Cloud Application Security Assessment (CASA) is a process or set of procedures designed to evaluate the security posture of cloud-based applications. With the increasing adoption of cloud computing, many organizations are migrating their applications to cloud platforms. However, this migration brings forth security challenges as well. CASA helps in identifying vulnerabilities, misconfigurations, and potential threats within cloud-based applications.
The assessment typically involves examining various aspects of cloud applications, such as:
Authentication and Authorization: Reviewing how user identities are managed and how access to resources within the application is controlled.
Data Encryption: Evaluating how data is encrypted both in transit and at rest within the cloud environment.
Network Security: Assessing the network architecture and configurations to ensure secure communication between components of the application.
Compliance: Ensuring that the cloud application adheres to relevant regulatory requirements and industry standards.
Data Protection: Assessing mechanisms in place to protect sensitive data from unauthorized access or leakage.
Logging and Monitoring: Reviewing logging and monitoring practices to detect and respond to security incidents effectively.
Third-Party Dependencies: Assessing the security of third-party services or libraries used within the cloud application.
CASA is crucial for organizations to identify and remediate security vulnerabilities before they can be exploited by attackers. It helps in ensuring the confidentiality, integrity, and availability of data and resources within cloud-based applications. Additionally, CASA can be part of a broader cloud security strategy aimed at mitigating risks associated with cloud adoption.
Benefits of CASA
Risk Mitigation: By identifying and addressing vulnerabilities, CASA helps organizations mitigate the risk of security breaches, data loss, and unauthorized access.
Enhanced Compliance: CASA ensures that cloud applications adhere to industry regulations and standards, reducing the likelihood of legal penalties and enhancing trust with customers.
Improved Incident Response: Through continuous monitoring and logging, CASA enhances an organization’s ability to detect and respond to security incidents swiftly, minimizing the impact of potential breaches.
Increased Resilience: CASA contributes to the overall resilience of cloud applications, ensuring they can withstand attacks and continue to operate securely even in the face of evolving threats.
Security Assessment
To maintain the security of Google users' data, apps that request access to restricted scopes need to undergo an annual security assessment. This assessment verifies that the app can securely handle data and delete user data upon request. Upon successfully passing the security assessment, the app is awarded a “Letter of Validation” (LOV) from the security assessor, indicating its ability to handle data securely.
Tiering: CASA adopted a risk-based, multi-tier assessment approach to evaluate app risk based on user count, scopes accessed, and other app-specific factors. Each project falls under a specific tier.
Accelerator: The CASA accelerator is a tool that minimizes the checks you have to complete based on the certifications you have already passed.
Annual Recertification: All apps must be revalidated every year. An app's tier can increase from one year to the next. Once an app has been validated at tier 3, it will continue to be validated at the tier 3 level in each following year.
When should I do a security assessment?
Security assessment of an app is the final step of the restricted scopes review process. Before initiating a security assessment of your app, it is important to complete all other verification requirements. If your app is requesting access to restricted scopes, the Google Trust and Safety team will reach out to you when it's time to start the security assessment process.
What is OWASP
OWASP stands for the Open Web Application Security Project. It is a nonprofit organization dedicated to improving the security of software. OWASP achieves its mission through community-led initiatives that include open-source projects, documentation, tools, and educational resources. The primary focus of OWASP is on web application security, although its principles and guidelines are often applicable to other types of software as well.
Some key aspects of OWASP include:
Top Ten: OWASP publishes the OWASP Top Ten, a list of the most critical web application security risks. This list is updated regularly to reflect emerging threats and trends in the cybersecurity landscape.
Guidelines and Best Practices: OWASP provides comprehensive guides, cheat sheets, and best practices for developers, security professionals, and organizations to build and maintain secure software.
Tools and Projects: OWASP sponsors and supports numerous open-source projects and tools aimed at improving security practices, testing for vulnerabilities, and educating developers and security practitioners.
Community Engagement: OWASP fosters a vibrant community of cybersecurity professionals, developers, researchers, and enthusiasts who collaborate on various initiatives, share knowledge, and contribute to the advancement of web application security.
Conferences and Events: OWASP organizes conferences, seminars, and workshops around the world to promote awareness of web application security issues and facilitate networking and learning opportunities for its members.
Overall, OWASP plays a crucial role in raising awareness about web application security and equipping organizations and individuals with the knowledge and resources needed to build more secure software.
What is ASVS
ASVS stands for the Application Security Verification Standard. It is a set of guidelines and requirements developed by the Open Web Application Security Project (OWASP) to establish a baseline of security requirements for web applications. The ASVS provides a framework for testing the security controls and defenses implemented in web applications, helping organizations ensure that their applications are adequately protected against common security threats and vulnerabilities.
The ASVS is structured into three levels of verification:
Level 1: This level consists of a set of core security requirements that all web applications should meet to provide a basic level of security. These requirements address fundamental security principles such as authentication, session management, access control, and data validation.
Level 2: Level 2 includes additional security requirements that are relevant for most web applications but may not be essential for all applications. These requirements cover areas such as cryptography, error handling, logging, and security configuration.
Level 3: This level contains advanced security requirements that are applicable to web applications with higher security needs or those handling sensitive data. These requirements address topics such as business logic flaws, secure communication, secure coding practices, and secure deployment.
The ASVS is used by organizations, security professionals, and developers to assess the security posture of web applications, identify potential vulnerabilities, and establish security requirements for development and testing. It provides a standardized approach to web application security verification, enabling consistency and comparability across different applications and environments. Additionally, the ASVS is regularly updated to reflect emerging threats, changes in technology, and best practices in web application security.
What is CWE
CWE stands for Common Weakness Enumeration. It is a community-developed list of software and hardware weakness types that can serve as a common language for describing software security weaknesses in a structured manner. CWE is maintained by the MITRE Corporation with the support of the US Department of Homeland Security’s National Cyber Security Division.
CWE provides a standardized way to identify, describe, and categorize common vulnerabilities and weaknesses in software and hardware systems. Each weakness type in CWE is assigned a unique identifier and is described in terms of its characteristics, potential consequences, and mitigations.
Some examples of weaknesses covered by CWE include:
Buffer Overflow
SQL Injection
Cross-Site Scripting (XSS)
Insecure Direct Object References
Insufficient Authentication
Use of Hard-Coded Credentials
Improper Input Validation
Insecure Cryptographic Storage
By using CWE, security professionals, developers, and organizations can better understand the nature of vulnerabilities and weaknesses in software systems, prioritize security efforts, and develop more secure software. Additionally, CWE provides a foundation for various security-related activities such as vulnerability assessment, penetration testing, secure coding practices, and security training.
The Intersection of CASA and ADA
Both CASA and ADA play pivotal roles in securing applications, albeit in different contexts. CASA is more focused on comprehensive assessments of cloud applications, while ADA targets the mobile app ecosystem. However, there is an intersection where both initiatives complement each other:
Shared Objectives: Both CASA and ADA aim to identify and mitigate vulnerabilities before they can be exploited by attackers.
Collaborative Approach: CASA and ADA emphasize collaboration—CASA between security teams and cloud service providers, and ADA between Google and cybersecurity firms.
Holistic Security: Organizations can leverage CASA to secure their cloud applications while ensuring their mobile counterparts are safeguarded by ADA’s protections.
Conclusion
As cloud and mobile technologies continue to evolve, the need for robust security frameworks like CASA and initiatives like ADA becomes ever more critical. CASA provides a comprehensive approach to securing cloud-based applications, addressing a wide range of security concerns from architecture to compliance. On the other hand, ADA focuses on protecting the mobile app ecosystem, particularly within the Google Play Store, by detecting and mitigating malicious apps before they reach users.
Together, these initiatives form a crucial part of the broader cybersecurity landscape, ensuring that both cloud-based and mobile applications remain secure in an increasingly interconnected digital world. As threats continue to evolve, ongoing innovation and collaboration in initiatives like CASA and ADA will be essential in maintaining the security and integrity of applications that billions of people rely on every day.
In the vast digital landscape, navigating and identifying resources is crucial. This is where URIs (Uniform Resource Identifiers) and URI schemes come into play. They act as the cornerstones of web navigation, ensuring we can pinpoint the exact information we seek. But what exactly are they, and how do they work together? URIs, or Uniform Resource Identifiers, are like the addresses of the internet, guiding us to the exact location of a resource. Whether you’re a seasoned developer or just starting out, understanding URIs and their schemes is crucial for navigating and utilizing the web efficiently.
In this blog, we will delve deep into what a URI is, explore the concept of URI schemes, and understand their significance in the world of web technologies.
What is a URI?
A Uniform Resource Identifier (URI) is a string of characters used to identify a resource either on the internet or within a local network. Think of it as a unique address that helps in locating resources like web pages, documents, images, or videos, similar to how a postal address identifies a particular location in the real world. The beauty of a URI lies in its simplicity and universality – it provides a standardized way to access a variety of resources across different systems. URIs are essential for the navigation, sharing, and management of web resources.
Components of a URI
A typical URI consists of several components, each serving a specific purpose. Let's break down the typical URI structure (a short parsing sketch follows this list):
Scheme: This initial part defines the protocol used to access the resource. Common examples include http for web pages, ftp for file transfer, and mailto for email addresses.
Authority: This section specifies the location of the resource, often containing the domain name or IP address, and sometimes a port number. For example, in https://www.softaai.com:8080/blog/article.html, the authority is www.softaai.com:8080.
Path: The path identifies the specific location of the resource within the designated authority. For instance, in the URI https://www.softaai.com/blog/article.html, the path points to the file “article.html” within the “blog” directory of the website “www.softaai.com”.
Query: This optional part holds additional information used to search, filter or modify the resource. Imagine searching a library catalog. The query string would be like specifying the author or genre to narrow down your search results.
Fragment: This final component refers to a specific section within the resource, often used for internal navigation within a webpage. For example, a URI ending with “#introduction” might jump you directly to the introduction section of a web document.
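To see these components in action, here is a small sketch using the standard java.net.URI class. The example URI combines the host, port, path, and fragment mentioned above; the query value topic=uri is made up purely for illustration.

Java
import java.net.URI;

class UriComponentsDemo {
    public static void main(String[] args) throws Exception {
        URI uri = new URI("https://www.softaai.com:8080/blog/article.html?topic=uri#introduction");

        System.out.println("Scheme:    " + uri.getScheme());     // https
        System.out.println("Authority: " + uri.getAuthority());  // www.softaai.com:8080
        System.out.println("Path:      " + uri.getPath());       // /blog/article.html
        System.out.println("Query:     " + uri.getQuery());      // topic=uri
        System.out.println("Fragment:  " + uri.getFragment());   // introduction
    }
}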
Examples of URIs
Here are a few examples that illustrate the variety of URIs:
https://www.softaai.com/index.html (a web page accessed over HTTPS)
ftp://ftp.softaai.com/images (a directory on an FTP server)
tel:+1234567890 (a telephone number)
urn:isbn:0451450523 (a book identified by its ISBN)
URIs can be broadly categorized into two types: URLs and URNs.
URL (Uniform Resource Locator)
A URL specifies the exact location of a resource on the internet, including the protocol used to access it. For example, https://www.softaai.com/index.html is a URL that tells us we need to use HTTPS to access the ‘index.html’ page on ‘www.softaai.com’.
URN (Uniform Resource Name)
A URN, on the other hand, names a resource without specifying its location or how to access it. It’s like a persistent identifier that remains the same regardless of where the resource is located. An example of a URN is urn:isbn:0451450523, which identifies a book by its ISBN.
Understanding URI Scheme
A URI Scheme is a component of the URI that specifies the protocol or the method to be used to access the resource identified by the URI. It defines the syntax and semantics of the rest of the URI, guiding how it should be interpreted and accessed. The scheme is typically the first part of the URI, followed by a colon (:). Think of URI schemes as the languages spoken by URIs. Each scheme defines a set of rules for how to interpret and access resources. It essentially tells the browser or the software how to handle the URI.
Common URI Schemes
Here are some of the most common URI schemes:
HTTP (Hypertext Transfer Protocol): Accessing web pages and web services. e.g. http://www.softaai.com
HTTPS (HTTP Secure): Accessing web pages and web services in a secure way. e.g. https://www.softaai.com
FTP (File Transfer Protocol): Transferring files between computers. e.g. ftp://ftp.softaai.com
MAILTO (Email Address): Sending an email. e.g. mailto:[email protected]
TEL (Telephone Number): Making a phone call through applications. e.g. tel:+1234567890
Each URI scheme defines its own set of rules for how the subsequent components of the URI are structured and interpreted. These schemes are standardized and maintained by the Internet Assigned Numbers Authority (IANA).
Custom URI Schemes
Developers can create custom URI schemes to handle specific types of resources or actions within their applications. For example, a mobile app might register a custom URI scheme like myapp:// to handle deep linking into the app. Another real-world example: a music player app might use a spotify: scheme to identify and play songs within its platform.
URI vs. URL vs. URN
It is important to distinguish between three related terms: URI, URL, and URN.
URI (Uniform Resource Identifier): A broad term that refers to both URLs and URNs.
Example: https://www.softaai.com
URL (Uniform Resource Locator): A subset of URI that provides the means to locate a resource by describing its primary access mechanism (e.g., its network location).
Example: http://www.softaai.com/index.html
URN (Uniform Resource Name): A subset of URI that provides a unique and persistent identifier for a resource without providing its location.
Example: urn:isbn:978-3-16-148410-0
Best Practices for Creating URIs
Keep it Simple: Use clear and concise paths.
Use Hyphens for Readability: softaai.com/our-products is more readable than softaai.com/ourproducts.
Avoid Special Characters: Stick to alphanumeric characters and a few reserved characters.
Examples of Well-Formed URIs
https://www.softaai.com/products
ftp://ftp.softaai.com/images
Common Mistakes to Avoid
Spaces: Avoid using spaces in URIs. Use hyphens or underscores instead.
Case Sensitivity: Be mindful of case sensitivity, especially in the path.
Understanding the Power of URIs and URI Schemes
Together, URIs and URI schemes form a powerful mechanism for navigating and accessing information on the web. They offer several advantages:
Universality: URIs provide a standardized way to identify resources, regardless of the underlying platform or application.
Accuracy: URIs ensure users reach the intended resource with minimal ambiguity.
Flexibility: URI schemes allow for customization and expansion, catering to diverse resource types and applications.
Conclusion
URIs are the backbone of the internet, guiding us to the myriad of resources available online. Understanding the components and types of URIs, as well as the importance of URI schemes, is essential for anyone navigating the digital world. As technology evolves, the role of URIs will continue to be pivotal, ensuring that we can access and share information seamlessly. By following best practices in creating and using URIs, we can ensure a smooth and efficient experience for both users and systems. Whether you’re building a website, developing an application, or simply browsing the web, a solid understanding of URIs will empower you to make the most of the resources at your fingertips.
URIs and URI schemes are the unsung heroes of the web. By understanding their structure and functionality, you gain a deeper appreciation for how information is organized and accessed on the internet. The next time you click on a link or enter a web address, remember the silent power of URIs and URI schemes working tirelessly behind the scenes!
In today’s digital age, ensuring secure and seamless access to online services is more critical than ever. Identity Providers (IDPs) play a pivotal role in this process by managing user identities and facilitating authentication across multiple platforms. This blog delves into the intricacies of IDPs, exploring their functionalities, benefits, and importance in the modern digital ecosystem.
What is an Identity Provider (IDP)?
An Identity Provider (IDP) is a system or service that creates, manages, and verifies user identities for authentication and authorization purposes. IDPs are integral to Single Sign-On (SSO) systems, enabling users to log in once and gain access to multiple services without needing to re-enter credentials.
Key Functions of an IDP
Authentication: IDPs validate that users are who they claim to be, typically through username and password verification, multi-factor authentication (MFA), biometrics, or other authentication methods.
Authorization: Once authenticated, IDPs determine what resources and services the user is permitted to access based on predefined roles and permissions.
User Management: IDPs handle the creation, updating, and deletion of user accounts, ensuring that user information is accurate and up-to-date.
Federation: IDPs enable identity federation, allowing users to use a single identity across multiple domains and services, often through protocols like SAML (Security Assertion Markup Language) or OAuth.
How IDPs Work
The operation of an IDP can be broken down into several key steps (a simplified sketch follows the list):
User Request: A user attempts to access a service or application.
Redirection: The application redirects the user to the IDP for authentication.
Authentication: The IDP prompts the user for credentials or another form of authentication.
Validation: The IDP validates the credentials and, if successful, generates an authentication token.
Token Exchange: The IDP sends the authentication token back to the application.
Access Granted: The application verifies the token and grants the user access.
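Below is a deliberately simplified, self-contained sketch of this flow. All class and method names (DemoIdentityProvider, authenticate, verify) are invented for illustration; real deployments use standard protocols such as SAML or OpenID Connect rather than ad hoc tokens.

Java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

class DemoIdentityProvider {
    private final Set<String> issuedTokens = new HashSet<>();

    // Steps 3-5: validate the credentials and return an authentication token on success.
    String authenticate(String username, String password) {
        boolean valid = "alice".equals(username) && "secret".equals(password);  // stand-in check
        if (!valid) {
            return null;
        }
        String token = UUID.randomUUID().toString();
        issuedTokens.add(token);
        return token;
    }

    // Step 6: the application asks the IDP whether a presented token is genuine.
    boolean verify(String token) {
        return token != null && issuedTokens.contains(token);
    }
}

class Application {
    public static void main(String[] args) {
        DemoIdentityProvider idp = new DemoIdentityProvider();

        // Steps 1-3: the user tries to reach the application and is redirected to the IDP to log in.
        String token = idp.authenticate("alice", "secret");

        // Steps 5-6: token exchange and verification before access is granted.
        System.out.println(idp.verify(token) ? "Access granted" : "Access denied");
    }
}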
Benefits of Using an IDP
Improved Security: By centralizing authentication, IDPs reduce the risk of password-related breaches and support advanced security measures like MFA.
User Convenience: Users benefit from SSO, which minimizes the need to remember multiple passwords and simplifies the login process.
Cost Efficiency: Organizations can reduce costs associated with managing multiple authentication systems and streamline IT support.
Scalability: IDPs can easily scale to accommodate growing user bases and integrate with new services.
Regulatory Compliance: IDPs help organizations comply with data protection regulations by ensuring secure and consistent user authentication practices.
Popular Identity Providers
Several well-known IDPs dominate the market, each offering unique features and capabilities:
Auth0: Known for its developer-friendly platform and extensive customization options.
Okta: Popular for its robust SSO capabilities and comprehensive identity management solutions.
Microsoft Azure AD: Widely used in enterprise environments, offering seamless integration with Microsoft services.
Google Identity: Integrates well with Google Workspace and other Google services, providing a straightforward user experience.
Ping Identity: Focuses on enterprise-level identity management and security.
IDP Protocols and Standards
IDPs rely on established protocols and standards to ensure secure and interoperable authentication:
SAML (Security Assertion Markup Language): An XML-based standard for exchanging authentication and authorization data between parties.
OAuth: An open standard for access delegation, commonly used for token-based authentication.
OpenID Connect: An identity layer on top of OAuth 2.0, used for verifying user identities and obtaining basic user profile information.
LDAP (Lightweight Directory Access Protocol): A protocol for accessing and maintaining distributed directory information services.
Challenges and Considerations
While IDPs offer numerous benefits, they also present certain challenges:
Complexity: Implementing and managing an IDP can be complex, especially for organizations with diverse IT environments.
Cost: Depending on the provider and the level of service required, costs can vary significantly.
Privacy Concerns: Centralizing user identities can raise privacy concerns if not managed properly, particularly regarding data storage and access.
Future Trends in Identity Management
As digital transformation accelerates, several trends are shaping the future of identity management:
Decentralized Identity: Leveraging blockchain technology to create self-sovereign identities that users control independently of any central authority.
AI and Machine Learning: Enhancing security by detecting anomalous behavior and improving fraud detection.
Passwordless Authentication: Moving towards more secure and user-friendly authentication methods that eliminate the need for passwords, such as biometrics and hardware tokens.
Adaptive Authentication: Implementing dynamic authentication processes that adjust based on the user’s context and risk level.
Conclusion
Identity Providers are the linchpin of secure and efficient digital authentication. They offer robust solutions for managing user identities, enhancing security, and simplifying access to online services. As the digital landscape continues to evolve, IDPs will play an increasingly critical role in ensuring seamless and secure user experiences. Organizations must carefully evaluate their identity management needs and choose the right IDP to stay ahead in the ever-changing digital world.
Exception handling is a critical aspect of Java programming that allows developers to gracefully manage errors and unexpected situations in their code. Java provides a robust mechanism for handling exceptions, which is essential for writing reliable and maintainable applications. In this guide, we’ll explore exception handling in Java in depth, covering various aspects including types of exceptions, try-catch blocks, exception propagation, best practices, and more.
Understanding Exceptions & Exception Handling in Java
Exception
An unexpected, unwanted event that disturbs the normal flow of the program is called an exception. For example, TirePuncturedException, SleepingException, and FileNotFoundException. It is highly recommended to handle exceptions. The main objective of exception handling is the graceful termination of the program. Exception handling doesn't mean repairing an exception; rather, it involves providing an alternative way to continue the rest of the program normally. This is the concept of exception handling. For example, if our programming requirement is to read data from a remote file located in London at runtime, and the London file is not available, our program should not terminate abnormally. Instead, we have to provide some local file to continue the rest of the program normally. This method of defining an alternative is nothing but exception handling.
Java
try {
    // Read data from the remote file located in London
} catch (FileNotFoundException e) {
    // Use a local file and continue the rest of the program normally
}
Runtime Stack Mechanism
For every thread, the JVM will create a runtime stack. Each method call performed by that thread will be stored in the corresponding stack. Each entry in the stack is called a stack frame or activation record. After completing every method call, the corresponding entry is removed from the stack. After completing all method calls, the stack will become empty, and that empty stack will be destroyed by the JVM just before terminating the thread. This is an example of when everything goes well. Next, we will discuss the default exception handling in Java.
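A minimal sketch makes this concrete; the method names doStuff and doMoreStuff are illustrative.

Java
class Test {
    public static void main(String[] args) {
        doStuff();
    }

    public static void doStuff() {
        doMoreStuff();
    }

    public static void doMoreStuff() {
        System.out.println("Hello");
    }
}

While doMoreStuff() is executing, the main thread's runtime stack holds three frames: main() at the bottom, doStuff() above it, and doMoreStuff() on top. Each frame is popped as its method completes, and the empty stack is destroyed just before the thread terminates.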
Default Exception Handling in Java
Inside a method, if any exception occurs, the method in which it is raised is responsible for creating an exception object by including the following:
Name of the exception.
Description of the exception.
Location at which the exception occurs (Stack Trace).
After creating the exception object, the method hands over that object to the JVM. The JVM checks whether the method contains any exception handling code. If the method doesn’t contain exception handling code, the JVM terminates that method abnormally and removes the corresponding entry from the stack. Then, the JVM identifies the caller method and checks whether the caller method contains handling code or not. If the caller method doesn’t contain handling code, the JVM terminates that caller method also abnormally and removes the corresponding entry from the stack. This process continues until the main method. If the main method also doesn’t contain handling code, the JVM terminates the main method abnormally and removes the corresponding entry from the stack. Then, the JVM hands over the responsibility of exception handling to the default exception handler, which is part of the JVM. The default exception handler prints exception information in the following format and terminates the program abnormally:
Java
Exception in thread "xxx": Name of the exception: Description
Stack trace
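The sample program that produces the output below is not included in the original; the following is a plausible reconstruction in which main prints two messages and then performs a division by zero with no handling code anywhere on the call stack (the exact statements and line numbering are assumptions chosen to match the stack trace).

Java
class Test {
    public static void main(String[] args) {
        System.out.println("Hello");
        System.out.println("Hi");
        System.out.println(10 / 0);   // line 5: raises ArithmeticException
    }
}

Running it produces: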
Hello
Hi
Exception in thread "main" java.lang.ArithmeticException: / by zero
	at Test.main(Test.java:5)
Note: In a program, if at least one method terminates abnormally, then the program termination is considered abnormal. If all methods terminate normally, then the program termination is considered normal.
Exception Hierarchy
The Throwable class acts as the root for the Java exception hierarchy. The Throwable class defines two child classes:
Exception
Error
Exception: Most of the time, exceptions are caused by our program and are considered recoverable. For example:
Java
try {
    // Read data from the remote file located in London
} catch (FileNotFoundException e) {
    // Use a local file and continue the rest of the program normally
}
Error: Errors are non-recoverable. For example, if an OutOfMemoryError occurs, being a programmer, we cannot do anything, and the program will be terminated abnormally. System admins or server admins are responsible for increasing heap memory.
Checked Exceptions
Checked exceptions are the exceptions that are checked at compile-time. These exceptions are subclasses of Exception but not subclasses of RuntimeException. Examples of checked exceptions include IOException, SQLException, and FileNotFoundException. It is mandatory for a method to either handle these exceptions using a try-catch block or declare them using the throws keyword in the method signature.
Unchecked Exceptions
Unchecked exceptions are the exceptions that are not checked at compile-time. They are subclasses of RuntimeException. Examples of unchecked exceptions include NullPointerException, ArrayIndexOutOfBoundsException, and ArithmeticException. It is not mandatory to handle unchecked exceptions explicitly, although it’s good practice to do so.
Checked Exception vs Unchecked Exception
Checked exceptions are those that are checked by the compiler for smooth program execution. Examples include HallTicketMissingException, PenNotWorkingException, and FileNotFoundException. If there is a chance of raising a checked exception in our program, we must handle it either by using try-catch blocks or by declaring it with the throws keyword; otherwise, we will get a compile-time error.
Unchecked exceptions, on the other hand, are not checked by the compiler for whether the programmer handles them or not. Examples include ArithmeticException and BomBlastException.
Note: Regardless of whether an exception is checked or unchecked, every exception occurs at runtime only; there is no chance of an exception occurring at compile time.
Note: RuntimeException and its child classes, as well as Error and its child classes, are unchecked. All other exceptions are checked.
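To make the checked vs unchecked distinction concrete, here is a minimal sketch; the file name abc.txt is just an example.

Java
import java.io.FileNotFoundException;
import java.io.FileReader;

class Test {
    public static void main(String[] args) {
        // Unchecked: the compiler does not force us to handle ArithmeticException,
        // even though it could be raised at runtime.
        System.out.println(10 / 2);

        // Checked: FileNotFoundException must be handled (or declared with throws),
        // otherwise the code does not compile.
        try {
            FileReader reader = new FileReader("abc.txt");
        } catch (FileNotFoundException e) {
            System.out.println("File not found, continuing with a local default");
        }
    }
}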
Fully Checked vs Partially Checked
A checked exception is fully checked if all its child classes are also checked, e.g., IOException and InterruptedException. A checked exception is partially checked if only some of its child classes are unchecked, such as Exception and Throwable.
Exception Behavior Description:
IOException: Checked (fully)
RuntimeException: Unchecked
InterruptedException: Checked (fully)
Error: Unchecked
Throwable: Checked (partially)
ArithmeticException: Unchecked
NullPointerException: Unchecked
Exception: Checked (partially)
FileNotFoundException: Checked (fully)
Customized Exception Handling by Using Try-Catch
It is highly recommended to handle exceptions. The code that may raise an exception is called risky code, and we have to define that code inside the try block. The corresponding handling code should be defined inside the catch block.
Without try-catch, a program that divides by zero encounters an ArithmeticException, leading to abnormal termination. With try-catch, the exception is caught, and the program continues executing the remaining statements, resulting in normal termination.
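The numbered statements that the cases below refer to are not shown in the original; a minimal sketch consistent with them might look like this, where the println calls stand in for statements 1 to 5 and the division by zero plays the role of the risky statement.

Java
class Test {
    public static void main(String[] args) {
        try {
            System.out.println("statement 1");
            System.out.println(10 / 0);          // statement 2 (risky code)
            System.out.println("statement 3");
        } catch (ArithmeticException e) {
            System.out.println("statement 4");
        }
        System.out.println("statement 5");
    }
}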
Case 1: If there is no exception, then output will be (1, 2, 3, 5, Normal Termination).
Case 2: If an exception is raised at statement 2 and the corresponding catch block is matched, then the output will be (1, 4, 5, Normal Termination).
Case 3: If an exception is raised at statement 2 and the corresponding catch block is not matched, then the output will be (1, Abnormal Termination).
Case 4: If an exception is raised at statement 4 or statement 5, then it is always an abnormal termination.
Note:
Within the try block, if an exception is raised anywhere, then the rest of the try block won’t be executed, even if we handle that exception. Therefore, within a try block, we should only place risky code, and the length of the try block should be as short as possible.
In addition to the try block, there may be a chance of an exception occurring inside catch and finally blocks. If any statement outside the try block raises an exception, then it always results in an abnormal termination.
Methods to Print Exception Information
The Throwable class defines the following methods to print exception information:
printStackTrace(): This method prints the exception’s name, description, and stack trace, showing the sequence of method calls that led to the exception.
Printable Format:
Java
Name of the exception: Description
Stack trace
toString(): Returns a string representation of the exception, including its name and description.
Java
Name of the exception: Description
getMessage(): Returns the description of the exception.
Java
Description
Example Program:
Java
class Test {
    public static void main(String[] args) {
        try {
            System.out.println(10 / 0);
        } catch (ArithmeticException e) {
            e.printStackTrace();                 // java.lang.ArithmeticException: / by zero
                                                 //     at Test.main(Test.java:4)
            System.out.println(e);               // java.lang.ArithmeticException: / by zero
            System.out.println(e.toString());    // java.lang.ArithmeticException: / by zero
            System.out.println(e.getMessage());  // / by zero
        }
    }
}
Note: Internally, the default exception handler will use printStackTrace() to print stack trace information to the console.
try with multiple catch blocks
The way of handling an exception varies from exception to exception; hence, for every exception type, it is highly recommended to use a separate catch block. Using try with multiple catch blocks is always possible and recommended.
Worst Programming Practice:
Java
try {
    // Risky code
} catch (Exception e) {
    // For all exceptions, use this single catch block
}
Best Programming Practice:
Java
try {
    // Risky code
} catch (ArithmeticException e) {
    // Perform alternative arithmetic operation
} catch (SQLException e) {
    // Use MySQL DB instead of Oracle DB
} catch (FileNotFoundException e) {
    // Use a local file instead of a remote file
} catch (Exception e) {
    // Default exception handling
}
In the best programming practice approach, each specific exception type is caught and handled appropriately, providing a more tailored and robust error-handling mechanism.
Important Loopholes
Some important loopholes in exception handling:
1. Order of catch blocks matters
If a try with multiple catch blocks is present, then the order of catch blocks is very important. We have to take the child exceptions first and then the parent exceptions; otherwise, we will get a compile-time error saying “Exception XXX has already been caught”.
We cannot declare two catch blocks for the same exception; otherwise, we will get a compile-time error.
Java
try {
    // Risky code
} catch (ArithmeticException e) {
    // Catch block for ArithmeticException
} catch (ArithmeticException e) {
    // Another catch block for ArithmeticException (duplicate)
}
// CE: Exception java.lang.ArithmeticException has already been caught
The correct code should be:
Java
try {
    // Risky code
} catch (ArithmeticException e) {
    // Catch block for ArithmeticException
} catch (Exception e) {
    // Catch block for other exceptions
}
These loopholes highlight the importance of careful handling and structuring of catch blocks to ensure effective exception management in Java programs.
Difference between final, finally, and finalize
final
final is a modifier applicable for classes, methods, and variables.
If a class is declared as final, then we can’t extend that class, meaning we can’t create a child class for that class. Inheritance is not possible for final classes.
If a method is final, then we can’t override that method in the child class. If a variable is declared as final, then we can’t perform reassignment for that variable.
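A short illustrative snippet, with class and member names invented for the example, shows all three uses of final.

Java
final class Vehicle { }
// class Car extends Vehicle { }          // compile-time error: cannot inherit from final Vehicle

class Demo {
    final int speedLimit = 60;             // final variable: reassignment is not allowed

    final void printSpeed() {              // final method: cannot be overridden in a child class
        System.out.println(speedLimit);
    }
}

class Child extends Demo {
    // void printSpeed() { }               // compile-time error if uncommented
}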
finally
finally is a block always associated with try-catch to maintain clean-up code.
The speciality of the finally block is that it will be executed always, irrespective of whether an exception is raised or not, and whether it is handled or not.
It is responsible for performing clean-up activities related to the try block. Any resources opened as part of the try block will be closed inside the finally block.
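A small example shows the finally block running regardless of whether an exception is raised and handled.

Java
class Test {
    public static void main(String[] args) {
        try {
            System.out.println(10 / 0);                    // risky code raises ArithmeticException
        } catch (ArithmeticException e) {
            System.out.println("catch block executed");
        } finally {
            System.out.println("finally block executed");  // runs whether or not an exception was raised
        }
        System.out.println("rest of the program continues");
    }
}

Output:
catch block executed
finally block executed
rest of the program continues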
finalize()
finalize() is a method always invoked by the garbage collector just before destroying an object to perform clean-up activities.
Once the finalize() method completes, immediately the garbage collector destroys that object.
It is responsible for performing clean-up activities related to the object. Any resources associated with the object will be deallocated before destroying the object using the finalize() method.
Note:
The finally block is responsible for performing clean-up activities related to the try block. Any resources opened as part of the try block will be closed inside the finally block. It ensures that resources are released regardless of whether an exception occurs or not.
On the other hand, the finalize() method is responsible for performing clean-up activities related to the object. It is invoked by the garbage collector just before destroying an object. Any resources associated with the object will be deallocated before destroying the object using the finalize() method. This method is used to release resources held by the object and perform any necessary clean-up actions before the object is removed from memory.
In short, final is used to denote immutability or prevent extension/overriding, finally is used for try-catch cleanup, and finalize() is used for object-specific cleanup just before garbage collection. Each serves a distinct purpose in Java programming.
Various Possible Combinations of Try-Catch-Finally
Rules:
Order is Important: In try-catch-finally, the order matters; it must be try, then catch, and then finally.
Compulsory Usage: Whenever we write try, it’s compulsory to include either catch or finally; otherwise, we will get a compile-time error (try without catch or finally is invalid).
Compulsory Try Block for Catch: Whenever we write a catch block, a try block must be present; catch without try is invalid.
Compulsory Try Block for Finally: Whenever we write a finally block, a try block should be present; finally without try is invalid.
Nesting of Try-Catch-Finally: Inside try-catch-finally blocks, we can nest additional try-catch-finally blocks; nesting of try-catch-finally is allowed.
Curly Braces Requirement: Curly braces ({}) are mandatory for try-catch-finally blocks.
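As a quick illustration of these rules, the following sketch (the class name and messages are placeholders) shows a few valid combinations, with the invalid ones kept as comments:
Java
class CombinationDemo {
    public static void main(String[] args) {
        // Valid: try with catch
        try {
            System.out.println("risky");
        } catch (Exception e) {
        }

        // Valid: try with finally only
        try {
            System.out.println("risky");
        } finally {
            System.out.println("cleanup");
        }

        // Valid: nested try-catch-finally
        try {
            try {
                System.out.println("inner risky");
            } catch (Exception e) {
            }
        } catch (Exception e) {
        } finally {
            System.out.println("outer cleanup");
        }

        // Invalid (compile-time error if uncommented): try without catch or finally
        // try { System.out.println("risky"); }

        // Invalid (compile-time error if uncommented): catch without try
        // catch (Exception e) { }
    }
}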
These examples demonstrate the various valid and invalid combinations of try-catch-finally blocks according to the specified rules.
Throw (throw keyword)
The throw keyword is used to manually create and throw an exception object. Sometimes, we need to create exception objects explicitly and hand them over to the JVM ourselves.
Java
throw new ArithmeticException("/ by zero");
Here, we are creating an ArithmeticException object explicitly and throwing it using the throw keyword.
The primary purpose of the throw keyword is to hand over our created exception object to the JVM manually.
The result of the following two programs is exactly the same:
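Without throw keyword (a minimal sketch consistent with the output shown below, in which the JVM itself creates and throws the exception):
Java
class Test {
    public static void main(String[] args) {
        System.out.println(10 / 0); // JVM raises ArithmeticException implicitly
    }
}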
Exception in thread "main"java.lang.ArithmeticException: / by zero at Test.main(Test.java:3)
In this case, the main method is responsible for causing the ArithmeticException and implicitly hands over the exception object to the JVM.
With throw keyword:
Java
class Test {
    public static void main(String[] args) {
        throw new ArithmeticException("/ by zero");
    }
}
Output:
Java
Exception in thread "main"java.lang.ArithmeticException: / by zero at Test.main(Test.java:3)
In this case, the programmer is explicitly creating an AE (ArithmeticException) object and throwing it using the throw keyword. This means the programmer is manually handing over the exception object to the JVM.
Note: The best use of the throw keyword is for user-defined or customized exceptions.
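Consider the following sketch (assumed here for illustration), where the exception reference is declared but never initialized:
Java
class Test {
    static ArithmeticException e; // Not initialized explicitly, so it defaults to null

    public static void main(String[] args) {
        throw e; // Throwing a null reference causes a NullPointerException at runtime
    }
}
// Runtime: Exception in thread "main" java.lang.NullPointerException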
Here, e is a static member variable of type AE, but it is not initialized explicitly. Therefore, it defaults to null. When attempting to throw e, a NullPointerException occurs.
So, the throw keyword is used to explicitly throw exceptions, which can be either built-in or user-defined. However, it’s crucial to ensure that the exception object being thrown is properly initialized to avoid runtime errors like NullPointerException.
Unreachable Statement after throw:
When an exception is raised implicitly (for example, by 10/0), the statements after the risky line are simply never executed at runtime. With an explicit throw statement, however, the compiler itself treats any code written after it as unreachable, because program flow can never continue beyond that point.
Java
class Test {
    public static void main(String[] args) {
        System.out.println(10 / 0);
        System.out.println("Hello");
    }
}
// Runtime error:
// Exception in thread "main" java.lang.ArithmeticException: / by zero
//     at Test.main(Test.java:3)
Unreachable Statement Error:
If there are statements after a throw statement, they are considered unreachable, and the compiler raises a compile-time error indicating “unreachable statement”.
Java
class Test {
    public static void main(String[] args) {
        throw new ArithmeticException("/ by zero");
        System.out.println("Hello");
    }
}
// Compile-time error: unreachable statement
Usage of throw with Non-Throwable Types:
The throw keyword is only applicable for throwable types (subclasses of Throwable). If we attempt to use it with non-throwable types, it results in a compile-time error due to incompatible types.
If a class extends RuntimeException, it becomes a throwable type, and instances of this class can be thrown using the throw keyword.
Java
class Test extends RuntimeException {
    public static void main(String[] args) {
        throw new Test();
    }
}
// Compiles fine; at runtime:
// Exception in thread "main" Test
//     at Test.main(Test.java:3)
In this example, Test is a subclass of RuntimeException, so it can be thrown without any compilation errors. When executed, it results in a runtime exception.
Throws (throws keyword)
In Java, if there is a possibility of a checked exception being thrown within a method, the method must either handle the exception using a try-catch block or declare that it throws the exception using the throws keyword in its method signature.
Java
import java.io.*;

class Test {
    public static void main(String[] args) {
        PrintWriter pw = new PrintWriter("abc.txt");
        pw.println("Hello");
    }
}
// Compilation error: unreported exception java.io.FileNotFoundException; must be caught or declared to be thrown
Here, the PrintWriter constructor throws a checked exception FileNotFoundException, which is not handled. Hence, a compilation error occurs.
To handle this error, you can either use a try-catch block to handle the exception:
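For example, a minimal sketch of the try-catch approach (reusing the same abc.txt example):
Java
import java.io.*;

class Test {
    public static void main(String[] args) {
        try {
            PrintWriter pw = new PrintWriter("abc.txt");
            pw.println("Hello");
        } catch (FileNotFoundException e) {
            System.out.println("Could not open abc.txt: " + e.getMessage());
        }
    }
}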
Alternatively, you can declare the exception in the method signature; using the throws keyword delegates the responsibility of handling the exception to the caller of the method.
Note: It’s recommended to handle exceptions using try-catch blocks where possible, as using throws may propagate exceptions up the call stack without handling them properly.
Let’s take one more example,
Java
class Test {
    public static void main(String[] args) {
        Thread.sleep(10000);
    }
}
// Compile-time error: unreported exception java.lang.InterruptedException; must be caught or declared to be thrown
The Thread.sleep() method may throw an InterruptedException, which is a checked exception. Since this exception is neither handled within the main method nor declared using the throws keyword in the method signature, the compiler raises the compilation error shown above.
Here, we can declare that the method throws the exception using the throws keyword:
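For example (a minimal sketch):
Java
class Test {
    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(10000);
        System.out.println("Woke up after 10 seconds");
    }
}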
Using the throws keyword delegates the responsibility of handling the exception to the caller of the method. However, it’s important to handle exceptions properly to prevent unexpected behavior in the program.
By Using throws Keyword
The throws keyword in Java is used in method declarations to indicate that a method might throw certain exceptions during its execution. This alerts the caller of the method that they need to handle or propagate these exceptions.
In this example, the main method uses the throws keyword to declare that it might throw an InterruptedException during its execution. This means that if the sleep() method within main throws an InterruptedException, the caller of main (in this case, the JVM) must handle it.
Similarly, if a method calls another method that declares a checked exception using throws, the calling method must either handle that exception or re-throw it using throws.
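For example (a minimal sketch in which IE stands for InterruptedException, raised here by Thread.sleep()):
Java
class Test {
    public static void main(String[] args) throws InterruptedException {
        doStuff();
    }

    public static void doStuff() throws InterruptedException {
        doMoreStuff();
    }

    public static void doMoreStuff() throws InterruptedException {
        Thread.sleep(10000);
        System.out.println("Hello");
    }
}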
In this example, doMoreStuff() can raise an InterruptedException (IE), a checked exception. Since doStuff() calls doMoreStuff(), and main() calls doStuff(), each method must either handle IE or declare it with throws as the exception propagates up the call stack.
However, it’s important to note that using throws doesn’t prevent abnormal termination of the program; it’s merely a way to indicate potential exceptions that might be thrown. Additionally, throws is only required for checked exceptions; there’s no impact or requirement for unchecked exceptions.
throws clause
The throws clause can be used to delegate the responsibility of exception handling to the caller, whether it is a method or the JVM.
It is required only for checked exceptions; the usage of the throws keyword for unchecked exceptions has no impact.
It is required only to convince the compiler; the usage of throws does not prevent the abnormal termination of the program.
Note: It is recommended to use try-catch blocks over the throws keyword.
Case 1: We can use the throws keyword for methods and constructors, but not for classes.
Case 2: We can use the throws keyword only for throwable types. If we try to use it for normal Java classes, we will get a compile-time error saying 'incompatible types'.
Case 3: If there is a chance of a checked exception being raised in our code, we must either handle it or declare it with throws; otherwise, we get a compile-time error. For unchecked exceptions (such as Error and its subclasses), the compiler does not complain, but the program may still terminate abnormally at runtime.
Java
class Test {
    public static void main(String[] args) {
        throw new Exception(); // Exception is checked
    }
}
// Compile-time error: unreported exception java.lang.Exception; must be caught or declared to be thrown
Java
class Test {
    public static void main(String[] args) {
        throw new Error(); // Error is unchecked
    }
}
// Compiles fine; at runtime:
// Exception in thread "main" java.lang.Error
//     at Test.main(Test.java:lineNumber)
Case 4: Within a try block, if there is no chance of raising an exception, then we can’t write a catch block for that exception; otherwise, we will get a compilation error: ‘Exception XXX is never thrown in the body of the corresponding try statement.’ However, this rule is applicable only for fully checked exceptions.
Java
class Test {
    public static void main(String[] args) {
        try {
            System.out.println("Hello");
        } catch (ArithmeticException e) { // unchecked exception
        }
    }
}

class Test {
    public static void main(String[] args) {
        try {
            System.out.println("Hello");
        } catch (Exception e) { // partially checked
        }
    }
}

class Test {
    public static void main(String[] args) {
        try {
            System.out.println("Hello");
        } catch (Error e) { // unchecked
        }
    }
}
// Output in all three cases: Hello
Java
import java.io.*;

class Test {
    public static void main(String[] args) {
        try {
            System.out.println("Hello");
        } catch (IOException e) { // fully checked
        }
    }
}
// Compile-time error: exception java.io.IOException is never thrown in the body of the corresponding try statement

class Test {
    public static void main(String[] args) {
        try {
            System.out.println("Hello");
        } catch (InterruptedException e) { // fully checked
        }
    }
}
// Compile-time error: exception java.lang.InterruptedException is never thrown in the body of the corresponding try statement
Customized or user-defined exception
Sometimes, to meet programming requirements, we can define our own exceptions. Such types of exceptions are called customized or user-defined exceptions, e.g., TooYoungException, TooOldException, InsufficientFundsException, etc.
Java
class TooYoungException extends RuntimeException {
    TooYoungException(String s) {
        super(s);
    }
}

class TooOldException extends RuntimeException {
    TooOldException(String s) {
        super(s);
    }
}

class CustomizedExceptionDemo {
    public static void main(String[] args) {
        int age = Integer.parseInt(args[0]);
        if (age > 60) {
            throw new TooOldException("Your age has already crossed the marriage age; there is no chance of getting married.");
        } else if (age < 18) {
            throw new TooYoungException("Please wait some more time; you will definitely get the best match.");
        } else {
            System.out.println("You will get match details soon by email!");
        }
    }
}
Note:
The throw keyword is best suitable for user-defined or customized exceptions but not for predefined exceptions.
It is highly recommended to define customized exceptions as unchecked, i.e., we have to extend RuntimeException but not Exception.
In the example, why is super(s) required?
super(s) is required to pass the description message to the superclass constructor, making it available to the default exception handler.
Exception Handling CheatSheet
Exception handling keyword summary
try –> Used to encapsulate risky code.
catch –> Used to handle exceptions.
finally –> Used to execute cleanup code.
throw –> Used to manually pass our created exception object to the JVM.
throws –> Used to delegate the responsibility of exception handling to the caller.
Various possible compiler errors in exception handling
“Unreported exception XXX; must be caught or declared to be thrown.”
“Exception XXX has already been caught.”
“Exception XXX is never thrown in the body of the corresponding try statement.”
Exceptions in Java are divided into two categories based on the entity raising them:
JVM Exception: These exceptions are raised automatically by the JVM whenever particular events occur. Examples include ArithmeticException and NullPointerException.
Programmatic Exception: These exceptions are raised explicitly, either by the programmer or by API developers, to indicate that something has gone wrong. Examples include TooOldException and IllegalArgumentException.
Top Ten Exceptions
1.ArrayIndexOutOfBoundsException: This exception is a subclass of RuntimeException and hence is unchecked. It is automatically raised by the JVM when attempting to access an array element with an index that is out of the array’s range. For example:
Java
int[] x = new int[4]; // Valid indices: 0 to 3
System.out.println(x[0]);   // 0
System.out.println(x[10]);  // Throws ArrayIndexOutOfBoundsException
System.out.println(x[-10]); // Throws ArrayIndexOutOfBoundsException
2.NullPointerException: This exception is a subclass of RuntimeException and hence is unchecked. It is automatically raised by the JVM when attempting to perform any operation on a null object. For example:
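For example (a minimal sketch):
Java
String s = null;
System.out.println(s.length()); // Throws NullPointerException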
3.ClassCastException: This exception is a subclass of RuntimeException and hence is unchecked. It is automatically raised by the JVM when attempting to type cast a parent object to a child type.
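For example (a minimal sketch):
Java
Object o = new Object();
String s = (String) o; // Compiles, but throws ClassCastException at runtime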
4.StackOverflowError: This error is a subclass of Error and hence is unchecked. It is automatically raised by the JVM when attempting to perform a recursive method call.
The recursive method calls in this code snippet would lead to a StackOverflowError:
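A sketch of such mutually recursive calls, which produce the ever-growing call stack shown after it:
Java
class Test {
    public static void main(String[] args) {
        m1();
    }

    public static void m1() {
        m2();
    }

    public static void m2() {
        m1();
    }
}
// Runtime: Exception in thread "main" java.lang.StackOverflowError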
Java
....
m2()
m1()
m2()
m1()
main()
// Runtime error: StackOverflowError
5. NoClassDefFoundError: This error is a subclass of Error and hence is unchecked. It is automatically raised by the JVM when it is unable to find the required .class file. For example, if the Test.class file is not available, then attempting to run java Test will result in a runtime exception with the message: “NoClassDefFoundError: Test”.
6. ExceptionInInitializerError: This error is a subclass of Error and hence is unchecked. It is automatically raised by the JVM if any exception occurs while executing static variable assignments or static blocks.
Java
class Test {
    static int x = 10 / 0;
}
// Runtime: java.lang.ExceptionInInitializerError caused by java.lang.ArithmeticException: / by zero
7. IllegalArgumentException: This exception is a subclass of RuntimeException and hence is unchecked. It is raised explicitly either by the programmer or by API developers to indicate that a method has been invoked with an illegal argument. For example, suppose the valid range of thread priority is 1 to 10. If we attempt to set the priority with any other value, then we will get a RuntimeException with the message “IllegalArgumentException”
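For example (a minimal sketch):
Java
Thread t = new Thread();
t.setPriority(7);   // Valid: allowed range is 1 to 10
t.setPriority(100); // Throws IllegalArgumentException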
8.NumberFormatException: This exception is a direct subclass of IllegalArgumentException and indirectly a subclass of RuntimeException, making it unchecked. It is raised explicitly either by the programmer or API developers to indicate that a conversion from a string to a number has been attempted, but the string is not properly formatted.
Java
int i = Integer.parseInt("10");  // Valid: i = 10
int j = Integer.parseInt("ten"); // Throws NumberFormatException
9.IllegalStateException: This exception is a subclass of RuntimeException and hence is unchecked. It is raised explicitly either by the programmer or API developers to indicate that a method has been invoked at the wrong time. For example, after starting a thread, we are not allowed to restart the same thread once again; otherwise, we will get a RuntimeException with the message “IllegalThreadStateException”.
Java
// Example of IllegalThreadStateException
Thread t = new Thread();
t.start();
t.start(); // Throws IllegalThreadStateException
10. AssertionError: This exception is a subclass of Error and hence is unchecked. It is raised explicitly by the programmer or by API developers to indicate that an assert statement has failed. For example, if the condition x > 10 does not hold, the assert statement raises an AssertionError.
Java
// Example of AssertionError (run with assertions enabled: java -ea Test)
int x = 5;
assert (x > 10); // Throws AssertionError
Raised by:
Raised automatically by the JVM and hence these are JVM exceptions:
ArrayIndexOutOfBoundsException
NullPointerException
ClassCastException
StackOverflowError
NoClassDefFoundError
ExceptionInInitializerError
Raised explicitly either by the programmer or by API developer and hence these are programmatic exceptions:
IllegalArgumentException
NumberFormatException
IllegalStateException
AssertionError
Java 1.7v Enhancements in Exception Handling
As part of version 1.7, two concepts were introduced for exception handling:
Try-with-Resources
Multi-catch block
Until version 1.6, it was highly recommended to write a finally block to close resources that were opened as part of a try block:
Java
BufferedReader br = null;
try {
    br = new BufferedReader(new FileReader("input.txt"));
    // Use br based on our requirement
} catch (IOException e) {
    // Handling code
} finally {
    if (br != null) {
        br.close();
    }
}
The problems with the above approach are:
The programmer is required to close resources inside the finally block, increasing the complexity of programming.
The finally block is mandatory, which increases the length of the code and reduces readability.
To overcome these problems, the developers introduced try-with-resources in version 1.7.
The main advantage of try-with-resources is that whatever resources we open as part of the try block will be closed automatically once control reaches the end of the try block, either normally or abnormally. Therefore, explicit closure is not required, reducing the complexity of programming. Additionally, there’s no need to write a finally block, which reduces the length of the code and improves readability.
1.7v Try with Resources
In version 1.7, the try-with-resources statement was introduced:
Java
try (BufferedReader br = new BufferedReader(new FileReader("input.txt"))) {
    // Use br based on our requirement.
    // br will be closed automatically once control reaches the end of the try block,
    // either normally or abnormally, and we are not responsible for closing it explicitly.
} catch (IOException e) {
    // Handling code
}
In this syntax, resources declared within the parentheses after the try keyword are automatically closed when the try block exits, whether normally or due to an exception. This feature simplifies resource management and improves code readability.
Try with Multiple Resources (1.7v)
In version 1.7, it became possible to declare and manage multiple resources in a single try-with-resources statement. These resources should be separated with semicolons:
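For example (a minimal sketch; the file names are assumptions and java.io.* imports are assumed):
Java
try (FileReader fr = new FileReader("input.txt");
     FileWriter fw = new FileWriter("output.txt")) {
    // Use fr and fw; both are closed automatically, in the reverse order of their declaration
} catch (IOException e) {
    // Handling code
}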
All resources specified within the try-with-resources statement must implement the AutoCloseable interface. A resource is considered AutoCloseable if the corresponding class implements the java.lang.AutoCloseable interface.
It’s worth noting that many I/O-related, database-related, and network-related resources already implement the AutoCloseable interface. As programmers, we are not required to do anything specific other than being aware of this feature.
AutoCloseable Interface in Java 1.7v
The AutoCloseable interface was introduced in version 1.7 and contains only one method: close().
All resource variables declared within a try-with-resources statement are implicitly final. Therefore, within a try block, we cannot perform reassignment to these variables; otherwise, we will encounter a compilation error. For example:
br = new BufferedReader(new FileReader("Output.txt"));
// Compile-time error: auto-closeable resource 'br' may not be assigned
Until version 1.6, the try block should be associated with either catch or finally. However, from version 1.7 onward, we can use try-with-resources without explicitly specifying catch or finally blocks:
Java
try (R) {
    // Code block
}
The main advantage of try-with-resources is that we are not required to write a finally block explicitly because we do not need to close resources explicitly. Therefore, until version 1.6, the finally block was considered essential, but from version 1.7 onward, it becomes unnecessary.
Multi-Catch Block
Until version 1.6, even if multiple different exceptions required the same handling code, a separate catch block had to be written for each exception type. This approach increased the length of the code and reduced readability.
To overcome this problem, the developers introduced the multi-catch block in version 1.7. With multi-catch blocks, a single catch block can handle multiple different types of exceptions:
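A minimal sketch (the risky statements are assumptions chosen to raise the two exception types discussed below):
Java
class Test {
    public static void main(String[] args) {
        try {
            System.out.println(10 / 0);
            String s = null;
            System.out.println(s.length());
        } catch (ArithmeticException | NullPointerException e) {
            // The same handling code serves both exception types
            System.out.println("Caught: " + e);
        }
    }
}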
In the above example, whether the raised exception is AE or NPE, the same catch block can handle it.
In a multi-catch block, there should not be any relation between exception types, such as child-to-parent, parent-to-child, or same type; otherwise, a compilation error will occur.
Java
try {
    // Code block
} catch (ArithmeticException | Exception e) {
    e.printStackTrace();
}
// Compilation error: Alternatives in a multi-catch statement cannot be related by subclassing
Exception Propagation: Inside a method, if an exception is raised and not handled, the handling will be propagated to the caller. Then, the caller method is responsible for handling the exception. This process is called exception propagation.
Re-throwing Exception: This approach is used to convert one exception to another exception.
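For example (a minimal sketch):
Java
try {
    System.out.println(10 / 0);
} catch (ArithmeticException e) {
    // Convert the low-level ArithmeticException into a different, more meaningful exception
    throw new IllegalArgumentException("Invalid divisor", e);
}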
Exception handling is an essential aspect of Java programming that enables developers to write robust and reliable code. By understanding the different types of exceptions, using try-catch blocks effectively, and following best practices for exception handling, you can create Java applications that gracefully handle errors and unexpected situations. Remember to handle exceptions appropriately, provide informative error messages, and clean up resources properly to ensure the reliability and maintainability of your code.
Multithreading is a programming concept that enables simultaneous execution of multiple threads within a process, allowing for better resource utilization and improved performance. In recent years, advancements in hardware and software technologies have led to significant enhancements in multithreading techniques. In this blog, we’ll delve into the latest multithreading enhancements and their implications for software development.
Thread Group: First Multithreading Enhancements
Based on functionality, we can group a thread into a single unit, which is nothing but a thread group. That is, a thread group contains a group of threads. In addition to threads, a thread group also contains sub-thread groups.
The main advantage of maintaining threads in the form of thread groups is that we can perform common operations easily.
Every thread in Java belongs to some group. The main thread belongs to the main group. Every thread group in Java is a child group of the system group, either directly or indirectly. Hence, the system group acts as the root for all thread groups in Java. The system group contains several system-level threads like finalizer, reference handler, signal dispatcher, and attach listener.
Java
class Test {
    public static void main(String[] args) {
        System.out.println(Thread.currentThread().getThreadGroup().getName());             // main
        System.out.println(Thread.currentThread().getThreadGroup().getParent().getName()); // system
        // Hierarchy: main thread -> main thread group -> system thread group
    }
}
Here, the main thread belongs to the "main" thread group, and the parent of the "main" thread group is the "system" thread group.
Thread Group Constructors
ThreadGroup is a Java class present in the java.lang package and is a direct child class of Object.
Constructors:
Java
ThreadGroup g = new ThreadGroup(String gName);

// Example usage:
ThreadGroup g = new ThreadGroup("First Group");
This constructor creates a new thread group with the specified name. The parent of this new group is the thread group of the currently executing thread.
Java
ThreadGroup g = new ThreadGroup(ThreadGroup tg, String groupName);

// Example usage:
ThreadGroup g1 = new ThreadGroup("First Group");
System.out.println(g1.getParent().getName()); // main
ThreadGroup g2 = new ThreadGroup(g1, "Second Group");
System.out.println(g2.getParent().getName()); // First Group
This constructor creates a new thread group with the specified group name. The parent of this new thread group is the specified parent group.
In the above example, the parent group for “Second Group” is explicitly set to “First Group,” demonstrating the flexibility of this constructor.
Thread Group Methods
Various methods present in ThreadGroup:
String getName(): Returns the name of the ThreadGroup.
int getMaxPriority(): Retrieves the maximum priority of the thread group.
void setMaxPriority(int priority): Sets the maximum priority of the thread group. The default maximum priority is 10. Threads in the thread group that already have a higher priority won’t be affected, but for newly added threads, this maximum priority is applicable.
ThreadGroup getParent(): Returns the parent group of the current thread group.
void list(): Prints information about the thread group to the console.
int activeCount(): Returns the number of active threads present in the thread group.
int activeGroupCount(): Returns the number of active thread groups present within the current thread group.
int enumerate(Thread[] t): Copies all active threads of this thread group into the provided Thread[] array. This includes threads from sub-thread groups.
int enumerate(ThreadGroup[] g): Copies all active sub-thread groups into a ThreadGroup[] array.
boolean isDaemon(): Checks whether the thread group is a Daemon or not.
void setDaemon(boolean daemon): Sets the Daemon nature of the current thread group.
void interrupt(): Interrupts all waiting or sleeping threads present in the thread group.
void destroy(): Destroys the thread group and its sub-thread groups.
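A small sketch using a few of these methods (the group and thread names are assumptions):
Java
class ThreadGroupDemo {
    public static void main(String[] args) {
        ThreadGroup g = new ThreadGroup("Worker Group");
        Thread t1 = new Thread(g, ThreadGroupDemo::sleepQuietly, "worker-1");
        Thread t2 = new Thread(g, ThreadGroupDemo::sleepQuietly, "worker-2");
        t1.start();
        t2.start();

        System.out.println(g.getName());              // Worker Group
        System.out.println(g.getParent().getName());  // main
        System.out.println(g.activeCount());          // 2 (an estimate; may vary with timing)
        g.list();                                     // Prints group and thread details
    }

    private static void sleepQuietly() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
        }
    }
}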
java.util.concurrent package
Problems with traditional synchronized keyword
Let’s first see the problems with the traditional synchronized keyword, then we will look at enhancements
The problems with the traditional synchronized keyword are:
Lack of Flexibility: There is no flexibility to attempt for locks without waiting.
Absence of Time Constraints: There is no way to specify the maximum waiting time for a thread to acquire a lock. Threads may wait indefinitely for a lock, potentially leading to performance problems or deadlock situations.
Lack of Control Over Lock Acquisition: When a thread releases a lock, there is no control over which waiting thread will acquire that lock next.
No API for Listing Waiting Threads: There is no API available to list out all waiting threads for a lock.
Limitation in Usage Scope: The synchronized keyword must be used either at the method level or within a method, and it’s not possible to use it across multiple methods.
To address these issues, the creators introduced java.util.concurrent.locks in version 1.5. This package provides several enhancements to programmers, offering more control over concurrency.
The Lock interface
Lock object is similar to the implicit lock acquired by a thread to execute a synchronized method or synchronized block. Lock implementations provide more extensive operations than traditional implicit locks.
Important methods of the Lock interface:
Java
void lock()
We can use this method to acquire a lock. If the lock is already available, then the current thread will immediately get that lock. If the lock is not available, then it will wait until obtaining the lock. It exhibits the same behavior as the traditional synchronized keyword.
Java
boolean tryLock()
Attempts to acquire the lock without waiting. If the lock is available, the thread acquires the lock and returns true. If the lock is not available, the method returns false, and the thread can continue its execution without waiting. In this case, the thread never enters a waiting state.
Java
if (lock.tryLock()) {
    // Perform safe operations
} else {
    // Perform alternative operations
}
Java
boolean tryLock(long time, TimeUnit unit)
If the lock is available, the thread will get the lock and continue its execution. If the lock is not available, then the thread will wait until the specified amount of time. If the lock is still not available after the specified time, the thread can continue its execution. TimeUnit: An enum present in the java.util.concurrent package.
Java
// tryLock(time, unit) throws InterruptedException, which must be handled or declared
if (lock.tryLock(1000, TimeUnit.MILLISECONDS)) {
    // Perform safe operations
}
void lockInterruptibly(): Acquires the lock if it is available and returns immediately. If the lock is not available, it will wait. While waiting, if the thread is interrupted, it won’t get the lock.
void unlock(): To call this method, the current thread should be the owner of the lock; otherwise, a runtime exception IllegalMonitorStateException will be thrown.
ReentrantLock
It is an implementation class of the Lock interface and is a direct child class of Object.
Reentrant means a thread can acquire the same lock multiple times without any issue. Internally, ReentrantLock increments the thread’s personal count whenever we call lock methods and decrements the count value whenever the thread calls the unlock method. The lock will be released whenever the count reaches zero.
ReentrantLock defines two constructors: ReentrantLock() and ReentrantLock(boolean fairness). The second creates a ReentrantLock with the given fairness policy. If fairness is set to true, the longest-waiting thread gets the lock when it becomes available, following a first-come-first-served policy. If fairness is set to false (the default), the selection of the waiting thread is not guaranteed.
Which of the following declarations are equal?
ReentrantLock l = new ReentrantLock();
ReentrantLock l = new ReentrantLock(true);
ReentrantLock l = new ReentrantLock(false);
The first and third declarations are equal.
Important methods of ReentrantLock:
(Comes from the Lock interface)
void lock()
boolean tryLock()
boolean tryLock(long l, TimeUnit t)
void lockInterruptibly()
void unlock()
Extra methods:
int getHoldCount(): Returns the number of holds on this lock by the current thread.
boolean isHeldByCurrentThread(): Returns true if the lock is held by the current thread.
int getQueueLength(): Returns the number of threads waiting for the lock.
Collection getQueuedThreads(): Returns a collection of threads that are waiting to acquire the lock.
boolean hasQueuedThreads(): Returns true if any thread is ready to acquire the lock.
boolean isLocked(): Returns true if the lock is currently held by any thread.
boolean isFair(): Returns true if the fairness policy is set to true.
Thread getOwner(): Returns the thread that acquired the lock.
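A minimal sketch putting a few of these methods together (the class, fields, and timeout are assumptions):
Java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    void increment() {
        lock.lock(); // Waits until the lock is available
        try {
            count++;
        } finally {
            lock.unlock(); // Always release the lock in a finally block
        }
    }

    void tryIncrement() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                count++;
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("Could not acquire the lock; doing alternative work");
        }
    }

    int get() {
        return count;
    }
}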
Thread Pools (Executor Frameworks)
Creating a new thread for every job may lead to performance and memory problems. To overcome this, we should use a thread pool. A thread pool is a pool of already created threads ready to execute our jobs. Java 1.5 introduced the Executor Framework to implement thread pools.
We can submit a Runnable job using the submit() method:
Java
service.submit(job);
We can shut down the ExecutorService by using the shutdown() method:
Java
service.shutdown();
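Putting it together, a minimal sketch of a fixed thread pool executing Runnable jobs (the class names and pool size are assumptions):
Java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PrintJob implements Runnable {
    private final String name;

    PrintJob(String name) {
        this.name = name;
    }

    public void run() {
        System.out.println(name + " executed by " + Thread.currentThread().getName());
    }
}

class ExecutorDemo {
    public static void main(String[] args) {
        ExecutorService service = Executors.newFixedThreadPool(3); // Pool of 3 reusable threads
        for (int i = 1; i <= 6; i++) {
            service.submit(new PrintJob("job-" + i));
        }
        service.shutdown(); // No new jobs accepted; already-submitted jobs still run
    }
}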
Callable and Future
In the case of a Runnable job, the thread won’t return anything after completing its job. If a thread is required to return some result after execution, then we should use Callable. The Callable interface contains only one method, call(). If we submit a Callable object to the executor, then after completing the job, the thread returns an object of type Future. The Future object can be used to retrieve the result from the Callable job.
Runnable vs Callable
Runnable:
If a thread is not required to return anything after completing a job, then we should use Runnable.
The Runnable interface contains only one method, run().
A Runnable job does not return anything; hence, the return type of run() is void.
Within the run method, if there is any chance of a raised checked exception, we must handle it using try-catch because we can’t use the throws keyword with the run method.
The Runnable interface is present in the java.lang package.
Introduced in version 1.0.
Callable:
If a thread is required to return something after completing a job, then we should use Callable.
The Callable interface contains only one method, call().
A Callable job is required to return something; hence, the return type of the call() method is Object (or the type parameter of Callable<T>).
Inside the call() method, if there is any chance of raising a checked exception, we are not required to handle it with try-catch, because call() is already declared to throw Exception.
The Callable interface is present in the java.util.concurrent package.
Introduced in version 1.5.
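A minimal sketch of submitting a Callable and reading its result through a Future (the class names are assumptions):
Java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class SumJob implements Callable<Integer> {
    private final int n;

    SumJob(int n) {
        this.n = n;
    }

    public Integer call() {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }
}

class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newSingleThreadExecutor();
        Future<Integer> future = service.submit(new SumJob(100));
        System.out.println("Sum = " + future.get()); // get() blocks until the result is ready: 5050
        service.shutdown();
    }
}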
ThreadLocal
The ThreadLocal class provides thread-local variables. The ThreadLocal class maintains values on a per-thread basis. Each ThreadLocal object maintains a separate value for each thread that accesses that object. Threads can access their local value, manipulate its value, and even remove its value. In every part of the code executed by a thread, we can access its variable.
For example, consider a servlet that invokes some business methods. We have a requirement to generate a unique transaction ID for every request, and we have to pass this transaction ID to the business methods. For this requirement, we can use ThreadLocal to maintain a separate transaction ID for every request, i.e., for every thread.
Once a thread enters into a dead state, all its local variables are by default eligible for garbage collection.
This allows for safe, efficient handling of thread-specific data without worrying about synchronization issues or interference from other threads. It ensures that each thread operates with its own set of data, isolated from other threads.
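A minimal sketch of the transaction-ID idea described above (the ID scheme and class names are assumptions):
Java
import java.util.concurrent.atomic.AtomicInteger;

class TransactionContext {
    private static final AtomicInteger NEXT_ID = new AtomicInteger(1);

    // Each thread gets its own transaction ID, initialized on first access
    private static final ThreadLocal<Integer> TX_ID =
            ThreadLocal.withInitial(NEXT_ID::getAndIncrement);

    static int currentTxId() {
        return TX_ID.get();
    }
}

class ThreadLocalDemo {
    public static void main(String[] args) {
        Runnable request = () ->
                System.out.println(Thread.currentThread().getName()
                        + " -> transaction id " + TransactionContext.currentTxId());
        new Thread(request, "request-1").start();
        new Thread(request, "request-2").start();
    }
}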
Conclusion
Multithreading enhancements are ongoing, driven by the need for faster, more responsive, and scalable software. These advancements improve concurrency, performance, debugging, and security, making multithreading a powerful and versatile tool for modern software development. As hardware and software continue to evolve, we can expect even more innovative and efficient multithreading techniques to emerge in the future.
In multi-threaded programming, communication between threads is essential for coordinating their activities and sharing data. Inter-thread communication in Java refers to the mechanisms through which threads communicate with each other to synchronize their actions or exchange data. This communication enables threads to work cooperatively towards achieving a common goal. In this blog, we will explore the concepts of inter-thread communication in Java, including synchronization, shared memory, and the wait-notify mechanism.
Why Inter-Thread Communication?
Consider a scenario where multiple threads are executing concurrently and need to coordinate their tasks or exchange information. For instance, one thread may produce data, while another consumes it. In such cases, inter-thread communication becomes necessary to ensure the correct execution and synchronization of threads.
Java provides several mechanisms for inter-thread communication, including synchronized blocks, wait-notify, and locks. These mechanisms allow threads to coordinate their activities effectively and avoid issues such as race conditions and deadlock.
Inter-thread communication
Two threads can communicate with each other by using wait(), notify(), and notifyAll(). The thread that is expecting an update is responsible for calling wait(), after which the thread enters a waiting state. The thread responsible for performing the update should call notify() after completing the update, allowing the waiting thread to receive the notification and continue its execution with the updated items.
Why are wait(), notify(), and notifyAll() present in the Object class but not in the Thread class? wait(), notify(), and notifyAll() are present in the Object class rather than the Thread class because a thread can call these methods on any Java object.
To call the wait(), notify(), and notifyAll() methods on any object, the thread must be the owner of that object's monitor, meaning it must hold the lock of that object. Therefore, these methods can only be called from within a synchronized area; otherwise, an IllegalMonitorStateException will be thrown.
If a thread calls wait() on any object, it immediately releases the lock of that particular object and enters a waiting state.
If a thread calls notify() on an object, it releases the lock of that object, but not necessarily immediately; the lock is given up only when the synchronized block or method completes. Apart from wait(), notify(), and notifyAll(), there are no other methods in which a thread releases a lock.
Hence, in the cases of yield(), join(), and sleep(), the thread does not release the lock, but it does release the lock in wait(), notify(), and notifyAll().
Note: Every wait() method throws InterruptedException, which is a checked exception. Therefore, whenever we use the wait() method, we must handle this InterruptedException, either with a try-catch block or by using the throws keyword; otherwise, a compile-time error will occur.
What is the impact of these methods on the thread lifecycle?
The wait(), notify(), and notifyAll() methods in Java have a significant impact on the lifecycle of threads. These methods are primarily used for inter-thread communication and synchronization. Let’s explore their impact on the lifecycle of threads:
wait()
Impact on Thread Lifecycle: When a thread calls the wait() method, it voluntarily gives up its hold on the object’s monitor (lock) and enters into the “waiting” state. This means that the thread is temporarily suspended and does not consume CPU resources until one of the following conditions occurs:
Another thread calls notify() or notifyAll() on the same object.
The specified timeout period elapses (if wait(long timeout) is used).
Transition in Thread Lifecycle: From the “waiting” state, a thread can transition back to the “runnable” state (ready to run) once it is notified or the timeout expires.
Example Use Case: Typically used to wait for a specific condition to be met or for another thread to complete its task before proceeding.
notify()
Impact on Thread Lifecycle: The notify() method is used to wake up a single thread that is waiting on the object’s monitor. If multiple threads are waiting, the choice of which thread to wake up is arbitrary and depends on the JVM’s implementation.
Transition in Thread Lifecycle: When notify() is called, the awakened thread transitions from the “waiting” state to the “runnable” state. However, it does not immediately acquire the object’s monitor. It competes with other threads to obtain the lock once it becomes available.
Example Use Case: Often used to signal that a condition has changed and other threads waiting on that condition can proceed.
notifyAll()
Impact on Thread Lifecycle: Similar to notify(), but notifyAll() wakes up all threads that are waiting on the object’s monitor. All awakened threads transition to the “runnable” state and compete for the object’s monitor.
Transition in Thread Lifecycle: Threads waiting on the object’s monitor are awakened and transitioned to the “runnable” state. They compete for the lock once it becomes available.
Example Use Case: Useful when multiple threads are waiting for a shared resource to become available or when a significant change occurs that affects multiple threads.
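The output and step-by-step explanation that follow refer to a program along these lines (a sketch reconstructed from that description; the class names are assumptions):
Java
class ThreadB extends Thread {
    int total = 0;

    public void run() {
        synchronized (this) {
            System.out.println("Child thread starts calculation");
            for (int i = 1; i <= 100; i++) {
                total += i;
            }
            System.out.println("Child thread giving notification");
            notify();
        }
    }
}

class ThreadA {
    public static void main(String[] args) throws InterruptedException {
        ThreadB b = new ThreadB();
        b.start();
        synchronized (b) {
            System.out.println("Main thread calling wait method");
            b.wait();
            System.out.println("Main thread got notification");
            System.out.println(b.total); // 5050
        }
    }
}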
Output:
Main thread calling wait method
Child thread starts calculation
Child thread giving notification
Main thread got notification
5050
Here,
The main thread starts and creates an instance of ThreadB and starts it.
Main thread enters a synchronized block with object b.
The main thread calls wait() on b, releasing the lock and going into a waiting state.
Meanwhile, ThreadB starts execution and calculates the sum.
After calculation, ThreadB calls notify(), waking up the waiting main thread.
The main thread resumes execution, prints “Main thread got notification”, and then prints the total sum calculated by ThreadB, which is 5050.
Producer-Consumer Problem
In the Producer-Consumer problem, the Producer thread is responsible for producing items to the queue, while the Consumer thread is responsible for consuming items from the queue. If the queue is empty, the Consumer thread calls the wait() method and enters a waiting state. After the Producer thread produces an item and adds it to the queue, it is responsible for calling notify(). This notification allows the waiting Consumer thread to resume execution and continue processing the updated items.
Java
// Producer thread is responsible for producing items to the queue
class ProducerThread {
    java.util.Queue<Object> q; // Shared queue (assumed to be wired up elsewhere)

    void produce() {
        synchronized (q) {
            // Produce items and add them to the queue
            q.notify(); // Wake up a waiting consumer
        }
    }
}

// Consumer thread is responsible for consuming items from the queue
class ConsumerThread {
    java.util.Queue<Object> q; // The same shared queue

    void consume() throws InterruptedException {
        synchronized (q) {
            if (q.isEmpty()) {
                q.wait(); // If the queue is empty, the consumer thread enters a waiting state
            } else {
                consumeItems(); // Consume items from the queue
            }
        }
    }

    void consumeItems() { /* ... */ }
}
In the Producer-Consumer Problem, there are two threads: ProducerThread and ConsumerThread.
ProducerThread is responsible for producing items and ConsumerThread is responsible for consuming items.
When ProducerThread produces an item, it notifies any waiting ConsumerThread by calling notify() after synchronizing on the queue object.
ConsumerThread, after synchronizing on the queue object, checks if the queue is empty.
If the queue is empty, ConsumerThread calls wait(), entering a waiting state until notified by the ProducerThread.
Once ProducerThread produces an item and notifies, the waiting ConsumerThread gets the notification and continues its execution, consuming the items from the queue.
This process ensures that the ProducerThread and ConsumerThread synchronize their actions properly to avoid issues such as producing when the queue is full or consuming when the queue is empty.
Difference between notify() and notifyAll()
The notify() method is used to provide a notification to only one waiting thread. If multiple threads are waiting, only one thread will be notified, and the remaining threads have to wait for further notifications. Which thread will be notified cannot be predicted, as it depends on the JVM.
On the other hand, the notifyAll() method is used to provide notification to all waiting threads of a particular object. Even if multiple threads are notified, execution will be performed one by one because threads require a lock, and only one lock is available.
Using notify() and notifyAll() allows for controlling the synchronization and communication between threads effectively, depending on the specific requirements of the application.
Understanding Lock Acquisition in Synchronized Blocks with wait()
To understand this concept better, let's first look at the examples below and then discuss them further.
Java
Stack s1 = new Stack();
Stack s2 = new Stack();
synchronized (s1) {
    // ...
    s2.wait(); // This will result in an IllegalMonitorStateException
    // ...
}
We will encounter an IllegalMonitorStateException in the above case. This is because the wait() method is being called on object s2, but the synchronization block is held on object s1. Therefore, the thread does not have the lock for s2, leading to the illegal state exception.
However, in the second example:
Java
synchronized (s1) {
    // ...
    s1.wait(); // This is a valid usage
    // ...
}
The above example is perfectly valid. Here, the wait() method is called on the same object s1 on which the synchronization block is held. Hence, the thread has the lock for s1, ensuring that the call to wait() is valid.
Note: When calling the wait() method on an object, the thread requires the lock of that particular object. For instance, if wait() is called on s1, the thread acquires the lock of s1 but not s2.
Deadlocks
Deadlocks occur primarily due to the use of the synchronized keyword, hence special care must be taken when using it. There are no resolution techniques for deadlocks, but several prevention techniques are available.
Java
class A {
    public synchronized void d1(B b) {
        System.out.println("Thread 1 starts execution of d1() method");
        try {
            Thread.sleep(6000);
        } catch (InterruptedException e) {
        }
        System.out.println("Thread 1 trying to call B's last()");
        b.last();
    }

    public synchronized void last() {
        System.out.println("Inside A, this is last() method");
    }
}

class B {
    public synchronized void d2(A a) {
        System.out.println("Thread 2 starts execution of d2() method");
        try {
            Thread.sleep(6000);
        } catch (InterruptedException e) {
        }
        System.out.println("Thread 2 trying to call A's last()");
        a.last();
    }

    public synchronized void last() {
        System.out.println("Inside B, this is last() method");
    }
}

class DeadLock1 extends Thread {
    A a = new A();
    B b = new B();

    public void m1() {
        this.start();
        a.d1(b); // This line is executed by the main thread
    }

    public void run() {
        b.d2(a); // This line is executed by the child thread
    }

    public static void main(String[] args) {
        DeadLock1 d = new DeadLock1();
        d.m1();
    }
}
Output-
Java
Thread 1 starts execution of d1() method
Thread 2 starts execution of d2() method
Thread 2 trying to call A's last()
Thread 1 trying to call B's last()
In the above program, if we remove any single synchronized keyword, the program won't enter into a deadlock. Therefore, the synchronized keyword is the only reason for the deadlock situation, and special care must be taken when using it.
Deadlock Versus Starvation
Deadlock occurs when threads are blocked forever because they are each waiting for a resource that the other thread holds. It results in a situation where no progress can be made.
On the other hand, starvation refers to a situation where a thread is unable to gain regular access to shared resources and is unable to make progress. However, the waiting of the thread in starvation eventually ends at certain points.
For example, if a low priority thread has to wait until completing all high priority threads, it will experience long waiting but will eventually get a chance to execute, which is a form of starvation.
Daemon Threads
Daemon threads are those executing in the background without interfering with the termination of the main application. Examples of daemon threads include the Garbage Collector, Signal Dispatcher, and Attach Listener.
The main objective of daemon threads is to provide support for non-daemon threads (such as the main thread). For instance, if the main thread is running with low memory, the JVM may run the garbage collector to reclaim memory from unused objects. This action improves the amount of free memory, allowing the main thread to continue its execution smoothly.
Usually, daemon threads have low priority, but depending on our requirements, they can run with high priority as well.
We can check if a thread is a daemon thread by using the isDaemon() method of the Thread class:
Java
public boolean isDaemon()
Similarly, we can change the daemon nature of a thread using the setDaemon() method of the Thread class, but this change is only possible before starting the thread. If we attempt to change the daemon nature after starting the thread, we will encounter an IllegalThreadStateException.
Default Nature of Thread
The default nature of thread is such that the main thread is always non-daemon, while for all other threads, the daemon nature is inherited from their parent thread. This means that if the parent thread is a daemon, then the child thread will also be a daemon, and if the parent thread is non-daemon, then the child thread will also be non-daemon.
It is impossible to change the daemon nature of the main thread because it is already started by the JVM at the beginning.
For example:
Java
class MyThread extends Thread {
}

class Test {
    public static void main(String[] args) {
        System.out.println(Thread.currentThread().isDaemon()); // false
        // Thread.currentThread().setDaemon(true); // This would result in IllegalThreadStateException
        MyThread t = new MyThread();
        System.out.println(t.isDaemon()); // false
        t.setDaemon(true);
        System.out.println(t.isDaemon()); // true
    }
}
When the last non-daemon thread terminates, all daemon threads are automatically terminated regardless of their position.
For example:
Java
class MyThread extends Thread {
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println("Child thread");
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
            }
        }
    }
}

class DaemonThreadDemo {
    public static void main(String[] args) {
        MyThread t = new MyThread();
        t.setDaemon(true); // Line 1: this makes the child thread a daemon
        t.start();
        System.out.println("End of main thread");
    }
}
If we comment out line 1 (t.setDaemon(true);), then both the main and child threads are non-daemon, and both threads will execute until their completion. However, if line 1 is not commented, the main thread is non-daemon, and the child thread is daemon. In this case, when the main thread terminates, the child thread will also be terminated. The possible outputs in this case are:
Java
// Possible output 1:
End of main thread
Child thread

// Possible output 2:
End of main thread

// Possible output 3:
Child thread
End of main thread
Multithreading Models
Java multithreading concepts are implemented using the following two models:
Green Thread Model: Threads managed entirely by the JVM without relying on underlying OS support are called green threads. Very few operating systems, such as SUN Solaris, provide support for the green thread model. However, the green thread model is deprecated and not recommended for use.
Native OS Model: Threads managed by the JVM with the assistance of the underlying operating system are referred to as the native OS model. All Windows-based operating systems provide support for the native OS model.
How to stop a thread?
If the stop() method is called, the thread immediately enters the dead state. However, the stop() method is deprecated and not recommended for use.
How to suspend and resume a thread?
The suspend() and resume() methods are used to suspend and resume threads, respectively. However, these methods are deprecated and not recommended for use.
When the suspend() method is called, the thread immediately enters the suspended state. A suspended thread can be resumed using the resume() method of the Thread class, allowing the suspended thread to continue its execution.
Conclusion
Inter-thread communication is essential in Java multi-threaded programming for coordinating the activities of concurrent threads. By using synchronization and mechanisms like the wait-notify protocol, threads can communicate effectively and synchronize their actions to avoid issues such as race conditions and deadlock. Understanding these concepts is crucial for writing thread-safe and efficient concurrent programs in Java.
In the realm of concurrent programming, synchronization plays a crucial role in ensuring thread safety and preventing race conditions. In Java, where multithreading is a fundamental feature, understanding synchronization mechanisms is essential for writing robust and efficient concurrent applications. In this comprehensive guide, we will delve into the intricacies of synchronization in Java threads, exploring its concepts, techniques, and best practices.
Synchronization in Java
Synchronization in Java is a critical concept for managing concurrent access to shared resources, preventing data inconsistency issues that can arise when multiple threads operate on the same data simultaneously.
“Synchronized”
The ‘synchronized’ keyword is a modifier applicable only to methods and blocks, not to classes or variables. If multiple threads try to operate on the same object simultaneously, there is a chance of a data inconsistency problem. To overcome this problem, we should use the synchronized keyword. If a method or block is declared as synchronized, then at any given time only one thread is allowed to execute that method or block on the given object, resolving the data inconsistency problem.
The main advantage of the “synchronized” keyword is that we can resolve data inconsistency problems, but the main disadvantage is that it increases the waiting time of threads and creates performance problems. Hence, if there is no specific requirement, it is not recommended to use the “synchronized” keyword.
If a thread wants to execute a synchronized method on the given object, it first has to obtain the lock of that object. Once the thread has obtained the lock, it is allowed to execute any synchronized method on that object. Once the method execution is complete, the thread automatically releases the lock. Acquiring and releasing the lock are internally taken care of by the JVM, and the programmer is not responsible for this activity.
While a thread is executing a synchronized method on a given object, the remaining threads are not allowed to execute any synchronized methods simultaneously on the same object, but they are allowed to execute non-synchronized methods simultaneously.
If thread t1 comes to execute m1() on object x, then t1 only acquires the lock of object x and starts executing m1(). Now, if t2 comes to execute m1() or t3 comes to execute m2(), then in both situations, a waiting state will occur. However, if t4 comes to execute m3(), it will get a chance immediately.
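The scenario above assumes a class along these lines (a sketch; method bodies are omitted):
Java
class X {
    public synchronized void m1() { /* ... */ }
    public synchronized void m2() { /* ... */ }
    public void m3() { /* ... */ }               // Non-synchronized
}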
The lock concept is implemented based on the object, not on the method. So, every Java object has two areas:
Non-synchronized area: This area can be accessed by any number of threads simultaneously, for example, wherever the object’s state won’t be changed, like a read() operation.
Synchronized area: This area can be accessed by only one thread at a time, for example, wherever we are performing update operations (Add/Remove/Delete/Replace), i.e., where the state of the object is changing.
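The case study discussed next uses a Display class with a synchronized wish() method; a sketch consistent with that description (the sleep interval is an assumption):
Java
class Display {
    public synchronized void wish(String name) {
        for (int i = 0; i < 10; i++) {
            System.out.print("Good Morning: ");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
            System.out.println(name);
        }
    }
}

class MyThread extends Thread {
    Display d;
    String name;

    MyThread(Display d, String name) {
        this.d = d;
        this.name = name;
    }

    public void run() {
        d.wish(name);
    }
}

class SynchronizedDemo {
    public static void main(String[] args) {
        Display d = new Display();
        MyThread t1 = new MyThread(d, "Krushna");
        MyThread t2 = new MyThread(d, "Arjuna");
        t1.start();
        t2.start();
    }
}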
Here, thread t1 calls d.wish("Krushna") and thread t2 calls d.wish("Arjuna") on the same Display object d.
In the SynchronizedDemo class, two threads t1 and t2 are created, both sharing the same Display object d. The wish() method of the Display class is synchronized, ensuring that only one thread can execute it at a time. This prevents data inconsistency issues.
Suppose we do not declare the wish() method as synchronized; then both threads execute it simultaneously, resulting in irregular (interleaved) output.
But, if we declare wish() as synchronized, then at a time only one thread is allowed to execute wish() on the given Display object, ensuring regular output.
As expected, each thread prints its name (“Krushna” for t1 and “Arjuna” for t2) ten times alternately, resulting in regular output due to the synchronization of the wish() method.
In a variation of this case study where t1 and t2 operate on two different Display objects, even though the wish() method is synchronized, we will get irregular output because the threads are operating on different Java objects and therefore acquire different locks.
Conclusion: If multiple threads are operating on the same Java object, then synchronization is required. If multiple threads are operating on multiple Java objects, then synchronization is not required.
Every class in Java has a unique lock, which is known as the class-level lock. If a thread wants to execute a static synchronized method, it requires the class-level lock. Once the thread acquires the class-level lock, it is allowed to execute any static synchronized method of that class. Once the method execution completes, the thread automatically releases the lock.
While a thread is executing a static synchronized method, the remaining threads are not allowed to execute any static synchronized method of that class simultaneously. However, the remaining threads are allowed to execute the following simultaneously: synchronized instance methods (which need the object lock, not the class lock), non-synchronized static methods, and non-synchronized instance methods.
If thread t1 comes to execute m1() on an object of class X, it acquires the class-level lock of X: t1 --> CL(X).
If t2 comes to execute m1(), it will go into a waiting state.
If t3 comes to execute m2(), it will go into a waiting state.
If t4 comes to execute m3(), it will execute it immediately.
If t5 comes to execute m4(), it will execute it immediately.
If t6 comes to execute m5(), it will execute it immediately.
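A sketch of the class this scenario assumes (method bodies are omitted):
Java
class X {
    public static synchronized void m1() { /* ... */ } // Needs the class-level lock
    public static synchronized void m2() { /* ... */ } // Needs the class-level lock
    public synchronized void m3() { /* ... */ }        // Needs the object lock, not the class lock
    public static void m4() { /* ... */ }              // No lock required
    public void m5() { /* ... */ }                     // No lock required
}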
Synchronized Block
If only a few lines of code require synchronization, it is not recommended to declare the entire method as synchronized. Instead, we can enclose those few lines of code using a synchronized block.
The main advantage of a synchronized block over a synchronized method is that it reduces the waiting time of threads and improves the performance of the system.
We can declare synchronized blocks as follows:
To get the lock of the current object:
Java
synchronized (this) {
    // Only a thread that holds the lock of the current object is allowed to execute this area
}
To get the lock of a particular object ‘b’:
Java
synchronized (b) {
    // Only a thread that holds the lock of the particular object 'b' is allowed to execute this area
}
To get the class-level lock:
Java
synchronized (Display.class) {
    // If a thread gets the class-level lock of the Display class, then only that thread is allowed to execute this area.
}
Using synchronized blocks allows for finer-grained control over synchronization, focusing only on the critical sections of code that require it, thus improving the overall performance of the system.
In the Display class, the wish() method is defined with extensive code execution before and after the synchronized block. Inside the synchronized block, a loop prints “Good Morning” along with the provided name, ensuring thread-safe execution of this critical section of code.
In the MyThread class, threads are created to execute the wish() method of the Display object with provided names.
In the SynchronizedDemo class, a Display object d is created, and two threads (t1 and t2) are instantiated to execute the wish() method on d concurrently with different names, as sketched below.
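The original listing is not reproduced here; a sketch consistent with the description would be the following, where the surrounding “extensive code” is represented by comments and the loop count is assumed.
Java
class Display {
    public void wish(String name) {
        // ... lots of code that does not touch shared state (no lock needed) ...
        synchronized (this) {
            // Critical section: only one thread at a time per Display object.
            for (int i = 0; i < 10; i++) {
                System.out.println("Good Morning: " + name);
            }
        }
        // ... more code that does not require synchronization ...
    }
}

class MyThread extends Thread {
    Display d;
    String name;
    MyThread(Display d, String name) {
        this.d = d;
        this.name = name;
    }
    public void run() {
        d.wish(name);
    }
}

class SynchronizedDemo {
    public static void main(String[] args) {
        Display d = new Display();
        MyThread t1 = new MyThread(d, "Krushna");
        MyThread t2 = new MyThread(d, "Arjuna");
        t1.start();
        t2.start();
    }
}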
Lock Concept
The lock concept is applicable to object types and class types, but not to primitives. Hence, we cannot pass a primitive value as the argument to a synchronized block; otherwise, we will get a compilation error: “unexpected type, found: int, required: reference.”
Can a thread acquire multiple locks simultaneously?
Yes, of course. A thread can hold the locks of multiple different objects at the same time. For example:
Java
class Y { }
class Z { }

class X {
    public synchronized void m1() {
        // Here, the thread holds the lock of this X object.
        Y y = new Y();
        synchronized (y) {
            // Here, the thread holds the locks of x and y.
            Z z = new Z();
            synchronized (z) {
                // Here, the thread holds the locks of x, y, and z.
            }
        }
    }
}

class LockDemo {
    public static void main(String[] args) {
        X x = new X();
        x.m1();
    }
}
In this example, while executing the method m1() on object x of class X, the thread holds the locks of the objects x, y, and z simultaneously.
Synchronized Statement
A synchronized statement refers to the set of statements enclosed within a synchronized method or synchronized block. These statements are termed synchronized statements because they are executed under the protection of synchronization, ensuring thread-safe access to critical sections of code.
In a synchronized method, the entire method body is considered a synchronized statement, as all statements within it are executed atomically, allowing only one thread to execute the method at any given time.
Similarly, in a synchronized block, the statements enclosed within the block are synchronized statements. These statements are executed under the lock associated with the specified object or class, ensuring mutual exclusion among threads attempting to access the synchronized block concurrently.
Overall, synchronized statements play a crucial role in achieving thread safety by ensuring that critical sections of code are executed in a mutually exclusive manner, thereby preventing data races and maintaining program correctness.
Synchronization is a fundamental aspect of Java concurrency, enabling safe and efficient coordination among multiple threads. By mastering synchronization techniques and understanding the underlying principles, developers can build robust and scalable concurrent applications. With this comprehensive guide, you’re equipped to navigate the complexities of synchronization in Java threads and harness the power of concurrent programming effectively.
In the realm of software development, multithreading is a powerful tool for building applications that can execute multiple tasks seemingly concurrently, enhancing responsiveness and performance. Java offers robust support for multithreading through its Thread class and Runnable interface, allowing the concurrent execution of multiple threads within the same process. This is crucial for building scalable, responsive, and efficient applications, especially now that multi-core processors are prevalent. In this comprehensive guide, we will delve into the depths of Java multithreading, covering everything from the basics to advanced concepts.
What is Multithreading in Java?
Before understanding multithreading, we first need to understand multitasking. Let’s see what it is.
Multitasking: Executing several tasks simultaneously is the concept of multitasking. There are two types of multitasking:
Process-based: Executing several tasks simultaneously, where each task is a separate, independent program (process), is called process-based multitasking. For example, while typing a Java program in an editor, we can listen to audio songs and download a file from the internet on the same system; all these tasks execute simultaneously and independently of each other. Process-based multitasking is best suited at the OS level.
Thread-based: Executing several tasks simultaneously, where each task is a separate independent part of the same program, is called thread-based multitasking. Each independent part is called a thread. Thread-based multitasking is best suited at the program level. For example, developing multimedia graphics, animation, video games, web server, and application server, etc.
Whether it is process-based or thread-based, the main objective of multitasking is to reduce the response time of the system and improve performance.
Now, What is Multithreading?
Multithreading refers to the concurrent execution of multiple threads within a single process. Each thread represents an independent flow of control, allowing programs to perform multiple tasks simultaneously.
BTW, What is Concurrency: The ability of tasks to appear to execute simultaneously, even on single-core processors.
Why Multithreading?
Multithreading enables applications to utilize the available CPU resources efficiently, especially on multi-core processors. It improves responsiveness by allowing tasks to run in the background while the main application thread remains responsive.
Threads vs. Processes
A thread is a lightweight process, and multiple threads can exist within a single process. Threads share the same memory space, making communication between them more efficient compared to processes, which have separate memory spaces.
Defining a Thread
We can define a thread in two ways:
By extending the Thread class
By implementing the Runnable interface
Let’s take a closer look at each one.
By Extending the Thread class
To define a thread by extending the Thread class, a new class can be created that extends the Thread class. Here’s an example:
Java
class MyThread extends Thread { // This way we can define a thread by extending the Thread class
    public void run() {
        // Inside the run method, whatever is there is the job of the thread.
        // Here, this for loop is the job of this child thread, and it is executed independently.
        for (int i = 0; i < 10; i++) {
            System.out.println("child thread");
        }
    }
}
In this method, the class MyThread extends the Thread class, enabling it to define a thread. The run() method within this class signifies the task to be executed by the thread. In this example, the loop within the run() method represents the specific job to be performed by the child thread.
Thread Execution and Thread Scheduler
Let’s look at the example below first.
Java
class ThreadDemo {
    public static void main(String[] args) {
        // At this point, only one thread is executing, and that is the main thread.
        MyThread t = new MyThread(); // Thread instantiation
        t.start();                   // Starting the thread
        // At this point, two threads are executing simultaneously:
        // one is the main thread, and the other is the child thread (t - MyThread).
        for (int i = 0; i < 10; i++) {
            // This for loop is executed by the main thread and not by its child thread.
            System.out.println("main thread");
        }
    }
}
The behavior of thread execution is influenced by the thread scheduler, a component of the JVM responsible for managing and scheduling threads.
Thread Scheduler
It is a part of the JVM and is responsible for scheduling threads. If multiple threads are waiting for a chance to execute, the order in which they run is decided by the thread scheduler. We cannot predict the exact algorithm followed by the thread scheduler, as it varies from JVM to JVM. Hence, we cannot predict the thread execution order or the exact output. Whenever a situation involves multithreading, there is no guarantee of the exact output, but we can list several possible outputs. The following are various possible outputs for the above program:
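The original output listing is not reproduced here. Purely for illustration, some of the possibilities are:
The two threads alternate, printing "main thread" and "child thread" turn by turn.
The main thread finishes its loop first, so all ten "main thread" lines appear before any "child thread" line.
The child thread finishes first, so all ten "child thread" lines appear before any "main thread" line.
Any other interleaving in which each thread's own ten lines appear in order.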
Due to the unpredictable nature of thread scheduling, any of these scenarios (or variations thereof) could occur during program execution. It’s important to note that while we cannot guarantee a specific output, understanding the behavior of thread scheduling enables us to anticipate various possible outcomes in multithreading scenarios.
Difference between t.start() and t.run()
In the case of t.start(), a new thread will be created which is responsible for the execution of the run method. However, in the case of t.run(), a new thread won’t be created, and the run method will be executed just like a normal method call by the main thread. Hence, in the above program, if we replace t.start() with t.run(), then the output is:
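The output screenshot is not reproduced here, but the result in this case is deterministic: the main thread first executes run() as an ordinary method call, printing "child thread" ten times, and then executes its own loop, printing "main thread" ten times.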
This entire output is produced by the main thread alone.
Importance of Thread class start method
The start method of the Thread class is responsible for registering the thread with the thread scheduler and performing all other mandatory activities. Without executing the start method of the Thread class, there is no chance of starting a new thread in Java. Due to this, the start method of the Thread class is considered the heart of multithreading.
Java
start() {
    // 1. Register this thread with the thread scheduler
    // 2. Perform all other mandatory activities
    // 3. Invoke run()
}
Each step in the start() method plays a critical role in setting up and launching a new thread, making it indispensable for effective multithreading in Java.
Overloading of the run method
Overloading of the run method is always possible, but the start method of the Thread class will always invoke the no-argument run method. The other overloaded methods must be called explicitly like a normal method call.
For instance:
Java
class MyThread extends Thread {
    public void run() {
        System.out.println("no arg run");
    }
    public void run(int i) {
        System.out.println("int arg run");
    }
}

class ThreadDemo {
    public static void main(String[] args) {
        MyThread t = new MyThread();
        t.start(); // This will invoke the no-argument run method implicitly.
    }
}
// Output: no arg run
Here, although the MyThread class has an overloaded run method that accepts an integer argument, it won’t be invoked implicitly when the thread starts. To execute the overloaded run method with an integer argument, it must be called explicitly like any other method.
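For instance, continuing with the class above, the int-argument version runs only when we call it directly (a hypothetical usage):
Java
MyThread t = new MyThread();
t.start();  // prints "no arg run", executed by the newly created thread
t.run(10);  // prints "int arg run", an ordinary method call executed by the calling thread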
Without overriding the run method
If we do not override the run method, then the Thread class’s run method is executed. That default run method has an empty implementation, so we get no output.
Java
class MyThread extends Thread {
    // No run method overridden
}

class Test {
    public static void main(String[] args) {
        MyThread t = new MyThread();
        t.start();
    }
}
// No output
Note: It is highly recommended to override the run method; without it, there is no point in using multithreading at all.
Overriding start method
If we override the start method, then our overridden start method will be executed just like a normal method call, and a new thread won’t be created.
Java
class MyThread extends Thread {
    public void start() {
        System.out.println("start method");
    }
    public void run() {
        System.out.println("run method");
    }
}

class Test {
    public static void main(String[] args) {
        MyThread t = new MyThread();
        t.start();
        System.out.println("main method");
    }
}

// Output:
// start method
// main method
// Note: This output is produced only by the main thread.
Here, the output is produced solely by the main thread, since no new thread is created. The overridden start method in the MyThread class behaves like a regular method call, printing “start method”, after which the main method prints “main method”. It’s important to note that overriding the start method is not recommended in general, especially when dealing with multithreading.
Note: Overriding the start method is never recommended; if you do, no new thread is created, and the benefit of multithreading is lost.
If we make one small change to the above program, the output changes as follows:
Java
class MyThread extends Thread {
    public void start() {
        super.start(); // Due to this line, two threads (the child thread and the main thread) run simultaneously, so the output can vary.
        System.out.println("Start method");
    }
    public void run() {
        System.out.println("run method");
    }
}

class Test {
    public static void main(String[] args) {
        MyThread t = new MyThread();
        t.start();
        System.out.println("main Thread");
    }
}
Here, due to the invocation of super.start() within the overridden start method of the MyThread class, two threads, the child thread, and the main thread, run simultaneously, thereby causing variations in the output. Here are the possible outputs:
Output 1
Java
run method
Start method
main Thread
In this case, the child thread gets a chance first and executes its run method; the main thread then completes the overridden start method by printing “Start method” and finally prints “main Thread”.
Output 2
Java
Start method
main Thread
run method
Here, the main thread completes the overridden start method by printing “Start method” and then prints “main Thread”; only afterwards does the child thread execute its run method.
Output 3
Java
Start method
run method
main Thread
In this case, the main thread prints “Start method”, then the child thread executes its run method, and finally the main thread prints “main Thread”.
Thread Lifecycle
Threads in Java go through various states during their lifecycle:
New: When a thread is created but not yet started.
Runnable: When a thread is ready to run but waiting for CPU time.
Blocked: When a thread is waiting to acquire a monitor lock in order to enter a synchronized block or method.
Waiting: When a thread is waiting indefinitely for another thread to perform a particular action.
Timed Waiting: Similar to waiting, but with a specified timeout period.
Terminated: When a thread completes its execution or is terminated prematurely.
The life cycle of a thread in Java can be outlined as follows:
New/Born: The thread is created but not yet started. For instance, when you create a new instance of a thread class using MyThread t = new MyThread(), the thread is in the “New” or “Born” state.
Ready/Runnable: After invoking the start() method on the thread object (t.start()), the thread becomes eligible to run, but it’s up to the thread scheduler to allocate processor time. The thread is considered “Ready” or “Runnable” at this stage.
Running: When the thread scheduler allocates processor time to the thread, it enters the “Running” state, and its run() method starts executing. The thread remains in this state until its run() method completes or is paused by the scheduler to allow other threads to run.
Dead: Once the run() method completes execution or the thread is explicitly terminated, the thread enters the “Dead” state. A thread is considered “Dead” when its execution is finished, and it cannot be started again.
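These states can be observed with the Thread.getState() method; the small sketch below is illustrative, and the printed states reflect typical rather than guaranteed values.
Java
class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500); // keep the thread alive for a moment
            } catch (InterruptedException e) {
            }
        });
        System.out.println(t.getState()); // NEW: created but not yet started
        t.start();
        System.out.println(t.getState()); // RUNNABLE (or TIMED_WAITING once sleep() has begun)
        t.join();                         // wait for the thread to finish
        System.out.println(t.getState()); // TERMINATED
    }
}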
What happens if we try to restart an already started thread?
After starting a thread, if we try to restart the same thread once again, then we will get IllegalThreadStateException.
Java
Thread t = new Thread();
t.start();
// Some other code...
t.start(); // This will throw IllegalThreadStateException
You create a new thread object t.
You start the thread using t.start(), which transitions the thread from the new state to the runnable state and invokes its run() method.
If you attempt to start the same thread again using t.start(), you’ll encounter the IllegalThreadStateException because the thread is already in the runnable or running state and cannot be restarted.
Note: Attempting to restart a thread that has already been started violates Java’s threading rules, hence the exception; a thread cannot be restarted once it has started running or has terminated.
Defining a Thread by Implementing the Runnable Interface
When defining a thread by implementing the Runnable interface, you create a class that implements the run() method defined in the Runnable interface. The Runnable interface is present in the java.lang package and it contains only one method: public void run(). Then, you pass an instance of this class to the Thread constructor, and invoke the start() method on the Thread object. This approach allows for better flexibility in Java’s multithreading model.
Java
// Defining a thread using the second approach: implementing the Runnable interface
class MyRunnable implements Runnable {
    public void run() {
        // Whatever is inside the run method is the job of the thread, and it is executed by the child thread.
        for (int i = 0; i < 10; i++) {
            System.out.println("child thread");
        }
    }
}

class ThreadDemo {
    public static void main(String[] args) {
        MyRunnable r = new MyRunnable();
        Thread t = new Thread(r); // r is the target runnable
        t.start();
        // At this point, two threads are executing simultaneously: the child thread and the main thread.
        for (int i = 0; i < 10; i++) {
            // Executed by the main thread
            System.out.println("main thread");
        }
    }
}
We will get mixed output, and we can’t determine the exact output.
Here, we defined a class MyRunnable that implements the Runnable interface and overrides its run() method to define the task of the thread.
You create an instance of MyRunnable.
You create a Thread object t and pass the MyRunnable instance to its constructor.
You start the thread using t.start(), which initiates the execution of the thread’s run() method concurrently with the main thread’s execution.
Both the main thread and the child thread execute simultaneously, leading to a mixed output.
Due to the concurrent execution of the main thread and the child thread, the output may vary, resulting in a mixed sequence of messages from both threads.
Case Study
Let’s dive into the different outcomes depending on the situation, considering the example setup sketched below.
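The original setup code is not reproduced here; a sketch consistent with the six cases that follow would be (the names r, t1, and t2 are inferred from the case descriptions):
Java
MyRunnable r = new MyRunnable(); // the Runnable implementation from the previous example
Thread t1 = new Thread();        // a plain Thread with no target Runnable
Thread t2 = new Thread(r);       // a Thread with r as its target Runnable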
Case 1:
Java
t1.start(); // A new thread will be created, responsible for the Thread class's run method, which has an empty implementation.
In this case, a new thread will be created, but since t1 is not associated with any Runnable object, the new thread will execute the run() method of the Thread class, which has an empty implementation.
Case 2:
Java
t1.run(); // No new thread will be created, and the Thread class's run method will be executed like a normal method call.
Here, no new thread will be created, and the run() method of the Thread class will be executed just like a normal method call.
Case 3:
Java
t2.start(); // A new thread will be created responsible for the execution of the MyRunnable class's run method.
A new thread will be created, and it will be responsible for executing the run() method of the MyRunnable class, as t2 is associated with the MyRunnable instance r.
Case 4:
Java
t2.run(); // No new thread will be created, and the MyRunnable class's run method will be executed like a normal method call.
No new thread will be created, and the run() method of the MyRunnable class will be executed just like a normal method call.
Case 5:
Java
r.start(); // Compile-time error: MyRunnable class doesn't have start capability (CE: cannot find symbol | symbol: method start() | location: class MyRunnable)
This will result in a compile-time error because the MyRunnable class does not have a start() method. It’s the Thread class that has the start() method.
Case 6:
Java
r.run(); // No new thread will be created, and the MyRunnable run method will be executed like a normal method call.
No new thread will be created, and the run() method of the MyRunnable class will be executed like a normal method call.
Which approach is best to define a thread?
Among the two ways of defining a thread, the “implements Runnable” approach is recommended.
In the first approach, our class always extends the Thread class, leaving no chance of extending any other class. Hence, we are missing inheritance benefits.
However, in the second approach, by implementing the Runnable interface, we can extend any other class. Therefore, we won’t miss any inheritance benefits.
For these reasons, implementing the Runnable interface is recommended over extending the Thread class, as the example below illustrates.
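For instance, in a purely hypothetical example, a class that must already extend some base class can still be run as a thread via Runnable:
Java
class ReportGenerator {
    // An existing base class that our task is required to extend.
}

class ReportTask extends ReportGenerator implements Runnable {
    public void run() {
        System.out.println("generating report in a separate thread");
    }
}

class Demo {
    public static void main(String[] args) {
        new Thread(new ReportTask()).start(); // possible only because we chose the Runnable approach
    }
}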
Another Way of Defining a Thread (Not Recommended)
Here’s another way of defining a thread in Java, valid but not recommended. Let’s see why.
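The original listing is not reproduced here; the hybrid approach in question looks roughly like this, with a Thread subclass instance passed as the target of another Thread:
Java
class MyThread extends Thread {
    public void run() {
        System.out.println("child thread");
    }
}

class ThreadDemo {
    public static void main(String[] args) {
        MyThread t = new MyThread();
        Thread t1 = new Thread(t); // t is used as the target Runnable, since Thread itself implements Runnable
        t1.start();                // the new thread t1 executes t's run() method
    }
}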
This approach is valid, but it is not recommended, for several reasons including inflexibility and the violation of good object-oriented design principles.
But is it even valid to pass a reference to another thread to the Thread constructor, as in new Thread(t)? It is, because of the following relationships:
java.lang.Thread extends java.lang.Object and implements the java.lang.Runnable interface.
Custom classes like MyThread can extend Thread and, therefore, indirectly implement the Runnable interface.
So, passing either a Runnable implementation or a Thread subclass reference is acceptable and completely valid in Java, but this hybrid approach is not recommended.
In this comprehensive guide, we’ve covered various aspects of Java multithreading, from the basics of creating and synchronizing threads to advanced concepts like thread safety, concurrency issues, and performance tuning. By mastering multithreading in Java and following best practices, you can develop efficient, scalable, and responsive applications that leverage the full power of modern multicore processors. Keep experimenting, learning, and refining your multithreading skills to build robust and high-performance software.