Amol Pawar

What Is Liquibase

What Is Liquibase? A Complete Guide to Database Change Management (2025)

Let me guess. You’ve nailed your application code, your CI/CD pipelines are humming, deployments are smooth… until you touch the database. Suddenly, things get messy: manual SQL scripts, environment inconsistencies, and mystery errors that only appear in prod.

Sound familiar?

That’s where Liquibase comes in.

What Is Liquibase?

Liquibase is an open-source database change management tool. Think of it like version control for your database. Just like Git tracks changes to your code, Liquibase tracks changes to your database schema and ensures those changes are applied safely, consistently, and automatically across environments.

It’s used by developers, DBAs, and DevOps teams to make database changes as agile, traceable, and reliable as code deployments.

Why You Should Care About Database Change Management

If you’re still shipping database changes by emailing SQL files around or copy-pasting commands into a terminal, it’s time for an upgrade.

Database change management matters because:

  • Manual scripts are error-prone
  • Rollback is painful or non-existent
  • Deployments become brittle and unpredictable
  • Audit and compliance? Forget about it

Liquibase solves all of this by bringing structure, automation, and traceability.

The Basics (How It Works)

Liquibase uses changelogs, which are XML, YAML, JSON, or SQL files that define what changes should happen to the database. Each change is a changeset.

Here’s a simple YAML changelog example:

YAML
# db-changelog.yaml

databaseChangeLog:
  - changeSet:
      id: 1
      author: amoljp19
      changes:
        - createTable:
            tableName: user
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
              - column:
                  name: username
                  type: varchar(50)
              - column:
                  name: email
                  type: varchar(100)

This changeset:

  • Creates a table named user
  • Adds id, username, and email columns
  • Sets id as the primary key

You run it with a command like:

Bash
liquibase --changeLogFile=db-changelog.yaml update

Liquibase will:

  1. Check which changesets have already been run (via a tracking table in your DB)
  2. Run only the new changes
  3. Mark them as completed

Boom. Your DB schema evolves, with no guesswork.
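The three steps above can be sketched in plain Python. This is a toy illustration of the bookkeeping only, not how Liquibase is implemented; the `applied` set stands in for the tracking table (DATABASECHANGELOG) Liquibase maintains in your database:

```python
def update(changelog, applied, execute):
    """Apply only changesets whose ids are not yet recorded as run."""
    for changeset in changelog:
        if changeset["id"] in applied:      # step 1: already run? skip it
            continue
        execute(changeset["sql"])           # step 2: run only the new change
        applied.add(changeset["id"])        # step 3: mark it as completed

ran = []
applied = set()
changelog = [{"id": 1, "sql": "CREATE TABLE user (...)"}]

update(changelog, applied, ran.append)
update(changelog, applied, ran.append)      # second run is a no-op
print(ran)  # ['CREATE TABLE user (...)'] -- executed exactly once
```

Because the tracking state lives in the database itself, rerunning the same changelog against the same database is always safe.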

A Real-World Example

Let’s say you want to add a new created_at timestamp column to the user table. Here’s how you’d do it:

YAML
- changeSet:
    id: 2
    author: amoljp19
    changes:
      - addColumn:
          tableName: user
          columns:
            - column:
                name: created_at
                type: timestamp
                defaultValueComputed: CURRENT_TIMESTAMP

Rerun the update command and Liquibase will apply just this new changeset. It’s smart enough to skip anything already applied.

Supported Databases and Formats

Liquibase supports all major relational databases:

  • PostgreSQL
  • MySQL
  • Oracle
  • SQL Server
  • SQLite
  • H2 (for testing)

And you can write changelogs in:

  • YAML (clean and human-readable)
  • XML (verbose but flexible)
  • JSON (for programmatic use)
  • SQL (if you prefer writing raw SQL with comments)

Integration with CI/CD Pipelines

Liquibase plays nicely with Jenkins, GitHub Actions, GitLab CI, Azure DevOps, and other automation tools. You can run it as part of your deployment pipeline to ensure database changes are always in sync with your application code.

Here’s a basic example using GitHub Actions:

YAML
jobs:
  db-update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Liquibase
        run: |
          liquibase --changeLogFile=db-changelog.yaml \
                    --url=jdbc:postgresql://dbhost:5432/mydb \
                    --username=amoljp \
                    --password=amoljp \
                    update

Rollbacks? Handled.

Every changeset can include a rollback section. Here’s an example:

YAML
- changeSet:
    id: 3
    author: amoljp19
    changes:
      - dropColumn:
          columnName: created_at
          tableName: user
    rollback:
      - addColumn:
          tableName: user
          columns:
            - column:
                name: created_at
                type: timestamp

Want to undo the last change? Run:

Bash
liquibase --changeLogFile=db-changelog.yaml rollbackCount 1

And just like that, it rolls back one changeset.

Best Practices (2025 Edition)

  1. One change per changeset — Easier to track and rollback.
  2. Use YAML or XML — Cleaner than SQL for most cases.
  3. Version your changelogs in Git — Keep DB and code in sync.
  4. Automate in CI/CD — Manual updates are error magnets.
  5. Test migrations locally — Don’t push straight to prod.

Conclusion

If your database changes are becoming a bottleneck or source of bugs, it’s time to look at Liquibase. It brings the discipline of version control, the safety of rollbacks, and the power of automation to your database.

It’s not just for big teams or enterprises. Even solo developers can benefit from Liquibase by avoiding “it works on my machine” database issues.

In 2025, if you’re not managing your database like code, you’re asking for trouble. Liquibase is your first step toward making database deployments boring, in the best possible way.

Reverse Engineering

What Is Reverse Engineering? Explained: From Concept to Code

Have you ever looked at a finished gadget, app, or piece of code and thought, “How the heck did they build this?” That’s exactly where Reverse Engineering comes in — it’s like digital archaeology for modern tech. Whether you’re a curious developer, cybersecurity enthusiast, or just someone who loves figuring things out, reverse engineering is a fascinating skill to explore.

In this post, we’ll break it all down — from the concept of reverse engineering to real code examples — so you walk away not only knowing what it is, but how to start doing it.

What Exactly Is Reverse Engineering?

At its core, Reverse Engineering is the process of taking something apart to understand how it works — then documenting, modifying, or improving it. While it originally came from mechanical engineering (think tearing down an engine), today it’s widely used in software, cybersecurity, game modding, and even competitive hacking (CTFs).

Imagine you have a compiled program, but no access to the source code. Reverse engineering lets you peel back the layers to uncover the logic, data structures, and behavior hidden inside.

Why Is Reverse Engineering Useful?

Here are a few real-world reasons people dive into reverse engineering:

  • Security research: Find vulnerabilities in apps and systems.
  • Legacy systems: Understand undocumented software to maintain or upgrade it.
  • Malware analysis: Dissect viruses or ransomware to see how they work.
  • Compatibility: Make old software work on new platforms.
  • Learning: Understand how advanced systems are built — great for self-teaching!

How Does Reverse Engineering Work?

Let’s look at a simplified breakdown of the process:

  1. Observation: Run the program and see what it does.
  2. Disassembly: Use tools to view the compiled binary code (machine language).
  3. Decompilation: Convert low-level code back into a higher-level approximation.
  4. Analysis: Understand data structures, logic flow, and algorithms.
  5. Modification (optional but not recommended): Patch, bypass, or improve the code, but be aware that doing so could violate legal restrictions or terms of service. Proceed with caution.

Types of Reverse Engineering

Let’s split this into two main categories: hardware and software.

Hardware Reverse Engineering

This often involves examining physical components — like circuit boards or mechanical parts. Engineers may take high-resolution images, use 3D scanning, or map out circuitry by hand.

Example: If a critical component in a legacy machine fails, and the manufacturer no longer exists, reverse engineering helps recreate or replace that part.

Software Reverse Engineering

This can be broken into two techniques:

1. Static Analysis

You inspect the software without running it. This involves:

  • Looking at the binary or compiled code
  • Using tools like Ghidra or IDA Free to decompile code into something readable
  • Understanding function names, variables, and logic flow
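Static analysis often starts even before a disassembler: just inspecting a file header tells you the format, word size, and endianness. Here is a tiny sketch that parses the ELF identification bytes of a hand-crafted header (real tools like readelf do this exhaustively):

```python
# The first 16 bytes of an ELF file (the e_ident block) identify the format.
# We craft a minimal 64-bit little-endian header by hand and parse it.
elf_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8

magic = elf_ident[:4]
assert magic == b"\x7fELF", "not an ELF binary"

word_size = {1: "32-bit", 2: "64-bit"}[elf_ident[4]]              # EI_CLASS
endianness = {1: "little-endian", 2: "big-endian"}[elf_ident[5]]  # EI_DATA
print(word_size, endianness)  # 64-bit little-endian
```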

2. Dynamic Analysis

Here, you run the software and monitor what it does. Tools like OllyDbg, x64dbg, or Wireshark let you:

  • Set breakpoints
  • Watch memory changes
  • Analyze system calls or network activity
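In spirit, dynamic analysis is exactly what Python’s own tracing hook does: run the code and watch what it calls. A toy illustration using `sys.settrace` (the `check`/`decode` functions are made-up stand-ins for an unknown program):

```python
import sys

calls = []

def tracer(frame, event, arg):
    # Record every function entry -- like a breakpoint on each call.
    if event == "call":
        calls.append(frame.f_code.co_name)
    return tracer

def decode(secret):
    out = ""
    for c in secret:
        out += chr(c ^ 0x20)   # trivial XOR "obfuscation"
    return out

def check(password):
    return password == decode(b"SECRET")

sys.settrace(tracer)
check("guess")
sys.settrace(None)
print(calls)  # ['check', 'decode'] -- the call flow, observed at runtime
```

Watching the call flow reveals that `check` delegates to a decoder, which is the kind of lead you would then chase with a debugger.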

Common Tools for Reverse Engineering

Before we jump into code, here are a few tools you’ll often see in reverse engineering:

  • IDA Pro / Ghidra — Disassemblers that help you analyze binaries.
  • x64dbg / OllyDbg — Debuggers for Windows.
  • Radare2 / Cutter — Open-source reverse engineering frameworks.
  • Wireshark — For network traffic inspection.
  • Hex-Rays Decompiler — Converts assembly to pseudocode.

Real-World Example: Code Deconstruction

Let’s say you find a mysterious binary function. After decompiling, you see this assembly code:

ASM
push ebp
mov  ebp, esp
mov  eax, [ebp+8]
add  eax, 5
pop  ebp
ret

Even if you’re not a pro, this pattern is pretty straightforward. Here’s how it works:

  • push ebp / mov ebp, esp: standard setup (prologue) for a function
  • mov eax, [ebp+8]: grabs the first argument passed to the function
  • add eax, 5: adds 5 to it
  • pop ebp / ret: restores the frame pointer and returns (the result is left in eax)

This is likely the compiled version of:

C
int addFive(int x) {
    return x + 5;
}

That’s reverse engineering — working backwards from machine instructions to human-readable logic.

Is Reverse Engineering Legal?

Good question! The answer isn’t black and white — it largely depends on what you’re doing and where you’re doing it.

If you’re reverse engineering for educational purposes or security research — and not distributing pirated software or stolen code — you’re likely in the clear.

Usually allowed:

  • Security research
  • Interoperability (e.g., making software compatible)
  • Personal use (e.g., restoring old hardware/software you own)

Usually restricted or illegal:

  • Circumventing DRM or copy protection
  • Repackaging and reselling proprietary software or designs
  • Hacking for unauthorized access

Always read license agreements and check local laws carefully before diving in.

Tips for Getting Started

  • Start small: Pick tiny programs you wrote yourself to disassemble.
  • Practice with CTFs: Platforms like Hack The Box and picoCTF are great.
  • Read reverse engineering write-ups: Learn from real-world examples.
  • Keep learning assembly: It’s the backbone of all binary analysis.
  • Don’t get discouraged: It’s tough at first, but insanely rewarding.
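“Start small” can even mean re-implementing a classic tool. The Unix `strings` utility just extracts runs of printable bytes from a binary, and its core idea fits in a few lines of Python (a simplified sketch, not the full tool):

```python
import re

def strings(data: bytes, min_len: int = 4):
    """Extract runs of printable ASCII, like the Unix `strings` tool."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

blob = b"\x00\x01hello world\x02\xffsecret_key=42\x00"
print(strings(blob))  # ['hello world', 'secret_key=42']
```

Running this over a real executable surfaces embedded paths, messages, and URLs — often the fastest first clue about what a binary does.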

Conclusion

Reverse Engineering isn’t just for hackers in hoodies — it’s a powerful way to understand, learn, and even protect software systems. Whether you’re analyzing malware, figuring out a legacy application, or just learning how binaries work, this skill puts you in control of what’s normally a black box.

By starting small, using the right tools, and staying curious, you can turn the mysterious world of compiled code into something you can read, modify, and even improve.

So next time you encounter an executable and wonder what’s inside, fire up your debugger and take a peek — you might just discover something amazing.

TL;DR: What Is Reverse Engineering?

  • Reverse engineering is the process of analyzing software (or hardware) to understand how it works.
  • It’s widely used in security research, malware analysis, and legacy software support.
  • You can start with simple tools like strings, objdump, and Ghidra.
  • It’s legal in many cases — especially for educational or research purposes.
  • Start small, stay curious, and practice often.

Happy reversing! 🕵️‍♂️💻

Ransomware 101

Ransomware 101: Everything You Need to Know to Stay Protected

Let’s talk about something that’s become way too common: ransomware. If you’ve never heard of it before, or if you’ve heard the word but aren’t exactly sure what it means, don’t worry — you’re not alone. I wrote this guide to give you the real-world, no-BS breakdown of what ransomware is, how it spreads, and what you can do to protect yourself. Whether you’re running a business or just trying to keep your personal laptop safe, this is for you.

What Is Ransomware, Really?

Ransomware is a type of malicious software (malware) that locks you out of your files or entire system until you pay a ransom. It’s like a digital hostage situation. The attacker usually demands payment in cryptocurrency (like Bitcoin) because it’s harder to trace.

Once it gets into your system, it starts encrypting your files — basically scrambling them so you can’t open anything. Then it flashes a message on your screen saying something like, 

Your files are locked. Pay us $500 in Bitcoin or lose everything.

And here’s the kicker: even if you pay, there’s no guarantee you’ll get your files back.

How Does It Spread?

Ransomware doesn’t just fall from the sky. It usually sneaks in through one of these methods:

  • Phishing Emails: You get an email that looks legit — maybe from your bank or a coworker — with a link or attachment. One click, and boom, you’re infected.
  • Malicious Websites: Sometimes just visiting a shady site can trigger a download in the background.
  • Software Vulnerabilities: Outdated software (especially operating systems or web browsers) can have security holes that ransomware exploits.
  • Compromised USB Drives: Yes, even plugging in an infected USB can do the trick.

Real Talk: Why Ransomware Is a Big Deal

This isn’t just a problem for big companies. Ransomware hits schools, hospitals, local governments, and regular people every day. Some folks lose precious family photos, years of work, or personal records. For businesses, downtime can cost thousands — or millions.

What’s worse, some newer strains of ransomware not only encrypt your files but also threaten to leak them online if you don’t pay. That’s a double whammy.

How to Protect Yourself from Ransomware

Alright, now that we’ve covered the scary part, here’s the good news: you can protect yourself. Here are the essentials:

1. Backup Everything. Regularly.

Make backing up your files a habit. Use an external hard drive or cloud storage (ideally both). If ransomware hits and you have clean backups, you can just wipe your system and restore your stuff.
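A backup you can’t trust is no backup at all. One common habit is recording a checksum at backup time and comparing it before you rely on the copy — a minimal sketch using Python’s standard library (the filename is illustrative):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used to verify a backup copy matches the original."""
    return hashlib.sha256(data).hexdigest()

original = b"contents of family_photos_2024.zip ..."
backup = original  # in practice: the bytes read back from the backup drive

assert fingerprint(backup) == fingerprint(original), "backup is corrupted!"
print("backup verified:", fingerprint(backup)[:12], "...")
```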

2. Keep Your Software Updated

Updates aren’t just annoying popups — they fix vulnerabilities that attackers exploit. Turn on automatic updates for your operating system, antivirus, browsers, and any other key software.

3. Use Strong Antivirus & Anti-Malware Tools

Make sure you have a solid antivirus program running. Windows Defender is decent, but for extra peace of mind, consider additional tools like Malwarebytes.

4. Learn to Spot Phishing Emails

If an email seems off, don’t click anything. Look for misspellings, weird addresses, and urgent language. Hover over links before clicking to see where they actually lead.
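The “hover before you click” habit can even be automated. Here is a tiny heuristic sketch (the domains are made-up examples, and a real filter would check much more):

```python
def looks_like_phishing(display_text: str, href: str) -> bool:
    """Flag a link whose visible text looks like a URL the href doesn't match."""
    claimed = display_text.lower().strip().rstrip("/")
    # Only suspicious when the visible text itself claims to be an address...
    if not claimed.startswith(("http", "www")):
        return False
    # ...but that address doesn't appear in the real destination.
    return claimed not in href.lower()

print(looks_like_phishing("www.mybank.com", "https://evil.example/login"))  # True
print(looks_like_phishing("Click here", "https://evil.example/login"))      # False
```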

5. Enable Ransomware Protection (Windows 10/11)

Did you know Windows has built-in ransomware protection?

Windows 10/11 Protection

Here’s how to enable it:

  1. Open "Windows Security"
  2. Click on "Virus & threat protection"
  3. Scroll down to "Ransomware protection"
  4. Click "Manage ransomware protection"
  5. Turn on "Controlled folder access"

This feature blocks unauthorized apps from accessing important folders.

6. Use Multi-Factor Authentication (MFA)

If someone steals your password, MFA can still block them. It’s a simple way to add a serious layer of protection.
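Those six-digit codes from an authenticator app aren’t magic: most implement HOTP/TOTP (RFC 4226/6238), which is just an HMAC over a shared secret and a counter. A compact sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: the building block behind most authenticator codes."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(hotp(b"12345678901234567890", 0))  # 755224 (RFC 4226 test vector)
```

TOTP is just HOTP with `counter = int(time.time() // 30)`, which is why each code expires after 30 seconds.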

What to Do If You Get Hit

First: Don’t pay the ransom. Paying doesn’t guarantee your files will be restored, and it just funds more attacks.

Here’s what to do:

  • Disconnect from the internet to stop the ransomware from spreading.
  • Scan your system with antivirus/malware tools to identify and remove the infection.
  • Restore from backups if you have them.
  • Report the incident to local authorities or a cybercrime unit.

If you’re stuck and need help, look into organizations like No More Ransom (nomoreransom.org). They offer free decryption tools for certain types of ransomware.

Conclusion

Ransomware isn’t going away anytime soon, but that doesn’t mean you have to live in fear. By understanding how it works and taking some basic steps, you can avoid becoming a victim.

If there’s one takeaway from this post, it’s this: Backup your data today. Seriously. Do it now.

Stay safe out there! 💻🔒

What Is Selenium

What Is Selenium? A Beginner’s Guide to the #1 Web Testing Tool

What Is Selenium?

Selenium is a free, open-source framework used to automate web browsers. Developers and QA engineers rely on it to:

  • Automate testing of web applications
  • Simulate real user interactions (clicks, typing, navigation)
  • Support multiple languages (Python, Java, JavaScript, C#, Ruby)

Originally created in 2004, Selenium has evolved into the industry-standard tool for browser-based testing. It’s flexible, powerful, and backed by a strong community.

Why Use Selenium?

  1. Cross‑Browser Testing
     Run tests on Chrome, Firefox, Safari, Edge, and more, ensuring consistent behavior across platforms.
  2. Supports Multiple Languages
     Write your tests in the language you love — be it Python, JavaScript, or others.
  3. Community & Ecosystem
     Rich support from blogs, plugins, tutorials, and extensions.
  4. Scalability
     Use Selenium Grid or cloud platforms like Sauce Labs to run tests in parallel.

Core Components of Selenium

Selenium consists of several key parts:

  • Selenium WebDriver: Main tool for controlling browsers.
  • Selenium IDE: Chrome/Firefox extension for record-and-playback testing.
  • Selenium Grid: Enables remote and parallel test execution.

The primary focus here is WebDriver, which interacts with the browser by simulating mouse movements, clicks, form entries, and more. Let’s explore a basic example.

Getting Started with Selenium in Python

Step 1: Install Selenium

Bash
pip install selenium

You also need a WebDriver executable for your browser (e.g., chromedriver for Chrome).

Step 2: Write Your First Test

Create a file named test_google_search.py:

Python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

# 1. Launch browser
driver = webdriver.Chrome()

try:
    # 2. Go to Google
    driver.get('https://www.google.com')

    # 3. Locate search box by its name attribute
    search_box = driver.find_element(By.NAME, 'q')

    # 4. Type and press Enter
    search_box.send_keys('Selenium testing')
    search_box.send_keys(Keys.RETURN)

    # 5. Print title
    print("Page title is:", driver.title)

finally:
    # 6. Close browser
    driver.quit()

What’s happening?

  1. Importing modules — we bring in webdriver and Keys for browser control and keyboard interaction.
  2. driver = webdriver.Chrome() — opens a Chrome session via the WebDriver executable.
  3. .get() — navigates to the target URL.
  4. find_element — locates the search input using its name attribute.
  5. send_keys() — simulates typing and pressing Enter.
  6. driver.title — fetches the current page title.
  7. finally: driver.quit() — guarantees the browser closes even if errors occur.

Expanding the Example: Assertion & Cleanup

Let’s assert that the title contains “Selenium”:

Python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://www.google.com')
search_box = driver.find_element(By.NAME, 'q')
search_box.send_keys('Selenium testing')
search_box.send_keys(Keys.RETURN)

assert 'Selenium' in driver.title, "Selenium not found in title"
print("Test passed. Title contains 'Selenium'")

driver.quit()

  • The assert statement verifies the expected behavior.
  • The flow is cleaner without try/finally, but use try/finally in real-world tests so the browser always closes, even on failure.

Tips for Clean Selenium Code

1. Use explicit waits, not time.sleep, to wait for page elements:

Python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
element = wait.until(EC.presence_of_element_located((By.NAME, 'q')))

2. Use Page Object Model (POM) to organize locators and actions into classes.

3. Parameterize tests (e.g., search terms) to reuse code.

4. Log actions and screenshots for easier debugging.
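The Page Object Model from tip 2 can be sketched without framework specifics. The page class owns its locators and exposes intent-level actions; the `driver` it receives is any object exposing WebDriver’s `find_element` API (`SearchPage` and its locator are illustrative names):

```python
class SearchPage:
    """Page Object: locators and user-level actions for one page live here."""

    SEARCH_BOX = ("name", "q")  # locator strategy + value, as WebDriver expects

    def __init__(self, driver):
        self.driver = driver  # any object exposing the WebDriver API

    def search(self, term: str) -> "SearchPage":
        box = self.driver.find_element(*self.SEARCH_BOX)
        box.send_keys(term + "\n")  # trailing newline submits, like Enter
        return self                 # return the page to allow chaining actions
```

A test then reads at the level of user intent — `SearchPage(driver).search("Selenium testing")` — and when a locator changes, only the page class needs editing.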

Advanced Features & Ecosystem

  • Selenium Grid: Run tests in parallel on multiple browsers/OS combos.
  • Headless Mode: Use headless browsers to save resources.
  • Cloud Integration: Services like BrowserStack and Sauce Labs support Selenium out of the box.
  • Extensions: Community libraries like pytest-selenium and selenium-page-factory help structure and scale tests.

Is Selenium Right for You?

Use Selenium if you need to:

  • Automate browser tasks
  • Test cross-browser web apps
  • Use code-based testing with real user actions

Avoid or complement Selenium if:

  • You need non-UI tests (unit tests, API tests) → use pytest, requests, etc.
  • You need visual regression testing → use tools like Applitools.
  • You’re testing mobile apps exclusively → Appium would be better.

Your Next Steps

  1. Install WebDriver for your browser and run the sample script.
  2. Add assertions and waits to make tests robust.
  3. Explore pytest or unittest integration for test suites.
  4. Try out Selenium Grid or cloud services for large-scale testing.

Conclusion

Selenium is a powerful, established tool that helps you automate web browsers exactly how a user would interact with them. With support for multiple languages, browsers, and platforms, it’s an essential component in web test automation. By following clean code practices, using waits, and organizing your tests, you’ll master Selenium quickly — and write reliable, maintainable tests.

Bundles in libs.versions.toml

Say Goodbye to Repetition: How Bundles in libs.versions.toml Simplify Android Projects

If you’re tired of repeating the same dependencies across different modules in your Android project, you’re not alone. Managing dependencies manually is error-prone, messy, and not scalable. Fortunately, Bundles in libs.versions.toml offer a clean and modern solution that saves time and reduces duplication. Let’s break it down, step by step, in a simple way.

What Is libs.versions.toml?

Starting with Gradle 7 and Android Gradle Plugin 7+, Google introduced Version Catalogs — a new way to centralize and manage dependencies. Instead of scattering dependency strings across multiple build.gradle files, you can now define everything in a single place: libs.versions.toml.

This TOML (Tom’s Obvious Minimal Language) file lives in your project’s gradle folder and acts as your master dependency list.

Here’s what a basic libs.versions.toml file looks like:

TOML
[versions]
kotlin = "1.9.0"
coroutines = "1.7.1"

[libraries]
kotlin-stdlib = { module = "org.jetbrains.kotlin:kotlin-stdlib", version.ref = "kotlin" }
coroutines-core = { module = "org.jetbrains.kotlinx:kotlinx-coroutines-core", version.ref = "coroutines" }

That’s great — but what if you’re using the same group of libraries in every module? Writing them out repeatedly is a waste of time. That’s where Bundles in libs.versions.toml come to the rescue.

What Are Bundles?

Bundles are a feature of version catalogs that let you group related dependencies under a single name. Think of them like playlists for your libraries. Instead of referencing each dependency one by one, you just call the bundle, and you’re done.

Why Use Bundles?

  • Clean, organized code
  • No repeated dependencies
  • Easy updates across modules
  • Better modularization

How to Create a Bundle in libs.versions.toml

Let’s say you’re using multiple Jetpack Compose libraries across several modules. Without bundles, you’d need to add each one like this:

Kotlin
implementation(libs.compose.ui)
implementation(libs.compose.material)
implementation(libs.compose.tooling)

With Bundles in libs.versions.toml, you can simplify it like this:

Step 1: Define the Libraries

TOML
[versions]
compose = "1.5.0"

[libraries]
compose-ui = { module = "androidx.compose.ui:ui", version.ref = "compose" }
compose-material = { module = "androidx.compose.material:material", version.ref = "compose" }
compose-tooling = { module = "androidx.compose.ui:ui-tooling", version.ref = "compose" }

Step 2: Create a Bundle

TOML
[bundles]
compose = ["compose-ui", "compose-material", "compose-tooling"]

How to Use Bundles in build.gradle.kts

In your module’s build.gradle.kts file, just add:

Kotlin
implementation(libs.bundles.compose)

That one-liner brings in all the Compose dependencies you need. Clean, right?

Real-World Use Case: Networking Stack

Let’s say you always use Retrofit, Moshi, and OkHttp in your data modules. Define a bundle like this:

TOML
[versions]
retrofit = "2.9.0"
moshi = "1.13.0"
okhttp = "4.10.0"

[libraries]
retrofit-core = { module = "com.squareup.retrofit2:retrofit", version.ref = "retrofit" }
moshi-core = { module = "com.squareup.moshi:moshi", version.ref = "moshi" }
okhttp-core = { module = "com.squareup.okhttp3:okhttp", version.ref = "okhttp" }

[bundles]
networking = ["retrofit-core", "moshi-core", "okhttp-core"]

Then in your module:

Kotlin
implementation(libs.bundles.networking)

You’ve just replaced three lines with one — and centralized version control in the process.

Common Mistakes to Avoid

  • Wrong syntax: The bundle array must reference exact keys from the [libraries] block.
  • Missing versions: Always define versions under [versions] and refer using version.ref.
  • Not reusing bundles: If two modules share the same libraries, don’t duplicate — bundle them.

Why Bundles in libs.versions.toml Matter for Android Developers

Bundles in libs.versions.toml are more than just a convenience—they’re a best practice. They improve your project structure, reduce maintenance overhead, and make scaling a breeze. Whether you’re working solo or on a large team, bundling dependencies is the smart way to manage complexity.

If you’re building modular Android apps (and let’s face it, who isn’t in 2025?), adopting bundles is a no-brainer.

Conclusion

The old way of managing dependencies is clunky and outdated. With Bundles in libs.versions.toml, you can streamline your workflow, stay DRY (Don’t Repeat Yourself), and future-proof your project.

Say goodbye to repetitive implementation lines and hello to clean, maintainable build scripts.

Start bundling today — and give your Android project the structure it deserves.

Proto DataStore

Proto DataStore in Android: How to Store Complex Objects with Protocol Buffers

Managing data on Android has evolved significantly over the years. From SharedPreferences to Room, we’ve seen the full spectrum. But when it comes to storing structured, complex data in a lightweight and efficient way, Proto DataStore steps in as a game-changer.

In this blog, we’ll walk through Proto DataStore, how it works under the hood, and how to use it with Protocol Buffers to store complex objects. We’ll also look at how it stacks up against the older SharedPreferences and why it’s the better modern choice.

Let’s break it down step by step.

What is Proto DataStore?

Proto DataStore is a Jetpack library from Google that helps you store typed objects persistently using Protocol Buffers (protobuf), a fast and efficient serialization format.

It’s:

  • Type-safe
  • Asynchronous
  • Corruption-resistant
  • Better than SharedPreferences

Unlike Preferences DataStore, which stores data in key-value pairs (similar to SharedPreferences), Proto DataStore is ideal for storing structured data models.

Why Use Proto DataStore?

Here’s why developers love Proto DataStore:

  • Strong typing — Your data models are generated and compiled, reducing runtime errors.
  • Speed — Protocol Buffers are faster and more compact than JSON or XML.
  • Safe and robust — Built-in corruption handling and data migration support.
  • Asynchronous API — Uses Kotlin coroutines and Flow, keeping your UI smooth.

Store Complex Objects with Proto DataStore

Let’s go hands-on. Suppose you want to save a user profile with fields like name, email, age, and preferences.

Step 1: Add the Dependencies

Add these to your build.gradle (app-level):

Groovy
dependencies {
    implementation "androidx.datastore:datastore:1.1.0"
    implementation "androidx.datastore:datastore-core:1.1.0"
    implementation "com.google.protobuf:protobuf-javalite:3.25.1"
}

In your build.gradle (project-level), enable Protobuf:

Groovy
protobuf {
    protoc {
        artifact = "com.google.protobuf:protoc:3.25.1"
    }

    generateProtoTasks {
        all().each { task ->
            task.builtins {
                java { }
            }
        }
    }
}

Also apply plugins at the top:

Groovy
plugins {
    id 'com.google.protobuf' version '0.9.4'
    id 'kotlin-kapt'
}

Step 2: Define Your .proto File

Create a file named user.proto inside src/main/proto/:

Protobuf
syntax = "proto3";

option java_package = "com.softaai.datastore";
option java_multiple_files = true;

message UserProfile {
  string name = 1;
  string email = 2;
  int32 age = 3;
  bool isDarkMode = 4;
}

This defines a structured data model for the user profile.
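Part of why protobuf output is so compact is its tag/varint wire format. As an illustration only (the generated classes do all of this for you), here is a hand-rolled encoding of the `name` and `age` fields defined above:

```python
def encode_varint(n: int) -> bytes:
    """Protobuf varint: 7 data bits per byte, high bit = 'more bytes follow'."""
    out = bytearray()
    while True:
        out.append((n & 0x7F) | (0x80 if n > 0x7F else 0))
        n >>= 7
        if not n:
            return bytes(out)

def encode_string_field(field_no: int, value: str) -> bytes:
    # Wire type 2 (length-delimited): tag, length, then the UTF-8 bytes.
    data = value.encode("utf-8")
    return encode_varint((field_no << 3) | 2) + encode_varint(len(data)) + data

def encode_int_field(field_no: int, value: int) -> bytes:
    # Wire type 0 (varint): tag, then the value itself.
    return encode_varint((field_no << 3) | 0) + encode_varint(value)

msg = encode_string_field(1, "Amol") + encode_int_field(3, 28)
print(msg.hex())  # 0a04416d6f6c181c -- two fields in just 8 bytes
```

Field names never appear on the wire — only the numeric tags from the .proto file — which is why protobuf messages are so much smaller than equivalent JSON.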

Step 3: Create the Serializer

Create a Kotlin class that implements Serializer<UserProfile>:

Kotlin
object UserProfileSerializer : Serializer<UserProfile> {
    override val defaultValue: UserProfile = UserProfile.getDefaultInstance()

    override suspend fun readFrom(input: InputStream): UserProfile {
        return UserProfile.parseFrom(input)
    }

    override suspend fun writeTo(t: UserProfile, output: OutputStream) {
        t.writeTo(output)
    }
}

This handles how the data is read and written to disk using protobuf.

Step 4: Initialize the Proto DataStore

Create a DataStore instance in your repository or a singleton:

Kotlin
val Context.userProfileDataStore: DataStore<UserProfile> by dataStore(
    fileName = "user_profile.pb",
    serializer = UserProfileSerializer
)

Now you can access this instance using context.userProfileDataStore.

Step 5: Read and Write Data

Here’s how you read the stored profile using Kotlin Flow:

Kotlin
val userProfileFlow: Flow<UserProfile> = context.userProfileDataStore.data

To update the profile:

Kotlin
suspend fun updateUserProfile(context: Context) {
    context.userProfileDataStore.updateData { currentProfile ->
        currentProfile.toBuilder()
            .setName("Amol Pawar")
            .setEmail("[email protected]")
            .setAge(28)
            .setIsDarkMode(true)
            .build()
    }
}

Easy, clean, and fully type-safe.

Bonus: Handling Corruption and Migration

Handle Corruption Gracefully

You can customize the corruption handler if needed:

Kotlin
val Context.safeUserProfileStore: DataStore<UserProfile> by dataStore(
    fileName = "user_profile.pb",
    serializer = UserProfileSerializer,
    corruptionHandler = ReplaceFileCorruptionHandler {
        UserProfile.getDefaultInstance()
    }
)

Migrate from SharedPreferences

If you’re switching from SharedPreferences:

Kotlin
val Context.migratedUserProfileStore: DataStore<UserProfile> by dataStore(
    fileName = "user_profile.pb",
    serializer = UserProfileSerializer,
    produceMigrations = { context ->
        listOf(SharedPreferencesMigration(context, "old_prefs_name"))
    }
)

When to Use Proto DataStore

Use Proto DataStore when:

  • You need to persist complex, structured data.
  • You care about performance and file size.
  • You want a modern, coroutine-based data solution.

Avoid it for relational data (use Room instead) or for simple flags (Preferences DataStore may suffice).

Conclusion

Proto DataStore is the future-forward way to store structured data in Android apps. With Protocol Buffers at its core, it combines speed, safety, and type-safety into one clean package.

Whether you’re building a user profile system, app settings, or configuration storage, Proto DataStore helps you stay efficient and future-ready.

TL;DR

Q: What is Proto DataStore in Android?
A: Proto DataStore is a modern Jetpack library that uses Protocol Buffers to store structured, type-safe data asynchronously and persistently.

Q: How do I store complex objects using Proto DataStore?
A: Define a .proto schema, set up a serializer, initialize the DataStore, and read/write using Flow and coroutines.

Q: Why is Proto DataStore better than SharedPreferences?
A: It’s type-safe, faster, handles corruption, and integrates with Kotlin coroutines.

Jetpack DataStore in Android

Mastering Jetpack DataStore in Android: The Modern Replacement for SharedPreferences

If you’re still using SharedPreferences in your Android app, it’s time to move forward. Google introduced Jetpack DataStore as a modern, efficient, and fully asynchronous solution for storing key-value pairs and typed objects. In this blog, we’ll break down what Jetpack DataStore is, why it’s better than SharedPreferences, and how you can use it effectively in your Android projects.

What Is Jetpack DataStore?

Jetpack DataStore is part of Android Jetpack and is designed to store small amounts of data. It comes in two flavors:

  • Preferences DataStore — stores key-value pairs, similar to SharedPreferences.
  • Proto DataStore — stores typed objects using Protocol Buffers.

Unlike SharedPreferences, Jetpack DataStore is built on Kotlin coroutines and Flow, making it asynchronous and safe from potential ANRs (Application Not Responding errors).

Why Replace SharedPreferences?

SharedPreferences has been around for a long time but comes with some baggage:

  • Synchronous API — can block the main thread.
  • Lacks error handling — fails silently.
  • Not type-safe — you can run into ClassCastExceptions easily.

Jetpack DataStore solves all of these with:

  • Coroutine support for non-blocking IO.
  • Strong typing with Proto DataStore.
  • Built-in error handling.
  • Better consistency and reliability.

Setting Up Jetpack DataStore

To start using Jetpack DataStore, first add the required dependencies to your build.gradle:

Kotlin
implementation("androidx.datastore:datastore-preferences:1.0.0")
implementation("androidx.datastore:datastore-core:1.0.0")

For Proto DataStore:

Kotlin
implementation("androidx.datastore:datastore:1.0.0")
implementation("com.google.protobuf:protobuf-javalite:3.14.0")

Also, don’t forget to apply the protobuf plugin if using Proto:

Kotlin
id("com.google.protobuf") version "0.8.12"

Using Preferences DataStore

Step 1: Create the DataStore instance

Jetpack DataStore is designed to be singleton-scoped. The recommended way is to create it as an extension property on Context:

Kotlin
val Context.dataStore: DataStore<Preferences> by preferencesDataStore(name = "user_prefs")

Here, preferencesDataStore creates a singleton DataStore instance, ensuring a single DataStore per file; creating multiple instances against the same file can lead to exceptions and corrupted data.

Step 2: Define keys

Kotlin
val USER_NAME = stringPreferencesKey("user_name")
val IS_LOGGED_IN = booleanPreferencesKey("is_logged_in")

stringPreferencesKey and booleanPreferencesKey help define the keys.
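
Beyond strings and booleans, androidx.datastore.preferences.core provides key builders for the other supported value types (the key names below are illustrative):

```kotlin
// Typed key builders from androidx.datastore.preferences.core.
val AGE = intPreferencesKey("age")
val VOLUME = floatPreferencesKey("volume")
val LAST_SYNC = longPreferencesKey("last_sync")
val RATING = doublePreferencesKey("rating")
val TAGS = stringSetPreferencesKey("tags")
```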

Step 3: Write data

To write data, use the edit function, which is fully asynchronous and safe to call from any thread:

Kotlin
suspend fun saveUserData(context: Context, name: String, isLoggedIn: Boolean) {
    context.dataStore.edit { preferences ->
        preferences[USER_NAME] = name
        preferences[IS_LOGGED_IN] = isLoggedIn
    }
}

Here, edit suspends while the data is being written, ensuring no UI thread blocking.

Step 4: Read data

To read data, use Kotlin Flows, which emit updates whenever the data changes:

Kotlin
val userNameFlow: Flow<String> = context.dataStore.data
    .map { preferences ->
        preferences[USER_NAME] ?: ""
    }

Here, data is accessed reactively: the map operator returns a Flow&lt;String&gt; that emits the username whenever it changes. You can collect this Flow in a coroutine or observe it in Jetpack Compose.
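
Because reads can fail with an IOException, the recommended pattern is to catch it and emit emptyPreferences() before mapping. A hedged sketch building on the keys defined above:

```kotlin
import java.io.IOException
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.catch
import kotlinx.coroutines.flow.map
import androidx.datastore.preferences.core.emptyPreferences

val safeUserNameFlow: Flow<String> = context.dataStore.data
    .catch { e ->
        // IO failures fall back to empty Preferences; rethrow anything else.
        if (e is IOException) emit(emptyPreferences()) else throw e
    }
    .map { preferences -> preferences[USER_NAME] ?: "" }
```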

Real-World Use Case: User Login State

Let’s say you want to keep track of whether a user is logged in. Here’s how you do it:

Save login state:

Kotlin
suspend fun setLoginState(context: Context, isLoggedIn: Boolean) {
    context.dataStore.edit { prefs ->
        prefs[IS_LOGGED_IN] = isLoggedIn
    }
}

Observe login state:

Kotlin
val loginState: Flow<Boolean> = context.dataStore.data
    .map { prefs -> prefs[IS_LOGGED_IN] ?: false }

This setup lets your app reactively respond to changes in the login state, such as redirecting users to the login screen or the home screen.
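
For example, the flow can drive navigation from a coroutine; showHome and showLogin here are hypothetical callbacks, and scope would typically be viewModelScope or lifecycleScope:

```kotlin
import android.content.Context
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.launch

// Sketch: react to login-state changes as they are persisted.
fun bindLoginState(
    scope: CoroutineScope,
    context: Context,
    showHome: () -> Unit,
    showLogin: () -> Unit,
) {
    scope.launch {
        context.dataStore.data
            .map { prefs -> prefs[IS_LOGGED_IN] ?: false }
            .collect { loggedIn -> if (loggedIn) showHome() else showLogin() }
    }
}
```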

Migrating from SharedPreferences

Jetpack DataStore makes migration easy with SharedPreferencesMigration:

Kotlin
import androidx.datastore.preferences.SharedPreferencesMigration

val Context.dataStore by preferencesDataStore(
    name = USER_PREFERENCES_NAME,
    produceMigrations = { context ->
        listOf(SharedPreferencesMigration(context, USER_PREFERENCES_NAME))
    }
)

  • Migration runs automatically before any DataStore access.
  • Once migrated, stop using the old SharedPreferences to avoid data inconsistency.

Using Proto DataStore (Typed Data)

Proto DataStore requires you to define a .proto schema file.

Step 1: Define the Proto schema

user_prefs.proto

Kotlin
syntax = "proto3";

option java_package = "com.softaai.sitless";
option java_multiple_files = true;

message UserPreferences {
  string user_name = 1;
  bool is_logged_in = 2;
}

Step 2: Create the serializer

Kotlin
object UserPreferencesSerializer : Serializer<UserPreferences> {
    override val defaultValue: UserPreferences = UserPreferences.getDefaultInstance()

    override suspend fun readFrom(input: InputStream): UserPreferences {
        return UserPreferences.parseFrom(input)
    }

    override suspend fun writeTo(t: UserPreferences, output: OutputStream) {
        t.writeTo(output)
    }
}

Step 3: Initialize Proto DataStore

Kotlin
val Context.userPreferencesStore: DataStore<UserPreferences> by dataStore(
    fileName = "user_prefs.pb",
    serializer = UserPreferencesSerializer
)

Step 4: Update and read data

Kotlin
suspend fun updateUser(context: Context, name: String, isLoggedIn: Boolean) {
    context.userPreferencesStore.updateData { prefs ->
        prefs.toBuilder()
            .setUserName(name)
            .setIsLoggedIn(isLoggedIn)
            .build()
    }
}

val userNameFlow = context.userPreferencesStore.data
    .map { it.userName }

Best Practices

  • Use Proto DataStore when your data model is complex or needs strong typing.
  • Use Preferences DataStore for simple key-value storage.
  • Always handle exceptions using catch when collecting flows.
  • Avoid main-thread operations; DataStore is built for background execution.

Conclusion

Jetpack DataStore is not just a replacement for SharedPreferences; it’s an upgrade in every sense. With better performance, safety, and modern API design, it’s the future of local data storage in Android.

If you’re building a new Android app or refactoring an old one, now’s the perfect time to switch. By embracing Jetpack DataStore, you’re not only writing cleaner and safer code, but also aligning with best practices endorsed by Google.

Use-Site

What Happens If You Don’t Specify a Use-Site Target in Kotlin?

In Kotlin, annotations can target multiple elements of a declaration — such as a field, getter, or constructor parameter. When you apply an annotation without explicitly specifying a use-site target (e.g., @MyAnnotation instead of @field:MyAnnotation), Kotlin tries to infer the most appropriate placement.

This default behavior often works well — but in some cases, especially when interoperating with Java frameworks, it can produce unexpected results. Let’s dive into how it works, and what’s changing with Kotlin 2.2.0.

Default Target Inference in Kotlin (Before 2.2.0)

If the annotation supports multiple targets (defined via its @Target declaration), Kotlin infers where to apply the annotation based on context. This is especially relevant for primary constructor properties.

Kotlin
annotation class MyAnnotation
class User(
    @MyAnnotation val name: String
)

In this case, Kotlin might apply @MyAnnotation to the constructor parameter, property, or field—depending on what @MyAnnotation allows.

Approximate Priority Order:

When multiple targets are applicable, Kotlin historically followed a rough order of priority:

  1. param – Constructor parameter
  2. property – The Kotlin property itself
  3. field – The backing field generated in bytecode

But this is not a strict rule — the behavior varies by context and Kotlin version.

Interop with Java Frameworks: Why Target Matters

Kotlin properties can generate several elements in Java bytecode:

  • A backing field
  • A getter method (and setter for var)
  • A constructor parameter (for primary constructor properties)

Java frameworks (like Jackson, Spring, Hibernate) often look for annotations in specific places — typically on the field or getter. If Kotlin places the annotation somewhere else (e.g., the property), the framework might not recognize it.

Kotlin
class User(
    @JsonProperty("username") val name: String
)

If @JsonProperty is placed on the property instead of the field, Jackson may not detect it correctly. The fix is to use an explicit target:

Kotlin
class User(
    @field:JsonProperty("username") val name: String
)

Kotlin 2.2.0: Refined Defaulting with -Xannotation-default-target

Kotlin 2.2.0 introduces a new experimental compiler flag:

Kotlin
-Xannotation-default-target=param-property

When enabled, this flag enforces a consistent and more predictable defaulting strategy, especially suited for Java interop.

New Priority Order:

  1. param – If valid, apply to the constructor parameter
  2. property – If param isn’t valid, and property is
  3. field – If neither param nor property is valid, but field is
  4. Error — If none of these are allowed, compilation fails

This makes annotation behavior more intuitive, especially when integrating with Java-based tools and frameworks.
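
For contrast, explicit use-site targets remove the guesswork entirely, whatever defaulting strategy the compiler uses; MyAnnotation here is a placeholder annotation:

```kotlin
annotation class MyAnnotation  // default retention is RUNTIME

class Account(
    @param:MyAnnotation val id: String,     // constructor parameter only
    @field:MyAnnotation val owner: String,  // backing field only
    @get:MyAnnotation val balance: Long     // getter only
)
```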

The @all: Meta-Target (Experimental)

Kotlin 2.2.0 also introduces the experimental @all: use-site target, which applies an annotation to all applicable parts of a property:

  • param (constructor parameter)
  • property (Kotlin-level property)
  • field (backing field)
  • get (getter)
  • set (setter, if var)

Example:

Kotlin
@all:MyAnnotation
var name: String = ""

This is equivalent to writing:

Kotlin
@param:MyAnnotation
@property:MyAnnotation
@field:MyAnnotation
@get:MyAnnotation
@set:MyAnnotation

Only the targets supported in the annotation’s @Target list will be applied.

Best Practices

Here’s how to work with Kotlin annotations effectively:

  • Using annotations with Java frameworks: use explicit use-site targets (@field:, @get:)
  • Want consistent defaulting: enable -Xannotation-default-target=param-property
  • Want broad annotation coverage: use @all: (if supported by the annotation)
  • Unsure where an annotation is being applied: use the Kotlin compiler flag -Xemit-jvm-type-annotations and inspect the bytecode or decompiled Java

Conclusion

While Kotlin’s inferred annotation targets are convenient, they don’t always align with Java’s expectations. Starting with Kotlin 2.2.0, you get more control and predictability with:

  • Explicit use-site targets
  • A refined defaulting flag (-Xannotation-default-target)
  • The @all: meta-target for multi-component coverage

By understanding and controlling annotation placement, you’ll avoid hidden bugs and ensure smooth Kotlin–Java interop.

how to apply annotations in Kotlin

How to Apply Annotations in Kotlin: Best Practices & Examples

Annotations are a powerful feature in Kotlin that let you add metadata to your code. Whether you’re working with frameworks like Spring, Dagger, or Jetpack Compose, or building your own tools, knowing how to apply annotations in Kotlin can drastically improve your code’s readability, structure, and behavior.

In this guide, we’ll walk through everything step by step, using real examples to show how annotations work in Kotlin. You’ll see how to use them effectively, with clean code and clear explanations along the way.

What Are Annotations in Kotlin?

Annotations are like sticky notes for the compiler. They don’t directly change the logic of your code but tell tools (like compilers, IDEs, and libraries) how to handle certain elements.

If you use @JvmStatic, Kotlin will generate a static method that Java can call without needing to create an object, which helps bridge Kotlin and Java more smoothly.

Kotlin
object Utils {
    @JvmStatic
    fun printMessage(msg: String) {
        println(msg)
    }
}

This makes printMessage() callable from Java without creating an instance of Utils.

How to Apply Annotations in Kotlin

To apply an annotation in Kotlin, you use the @ symbol followed by the annotation’s name at the beginning of the declaration you want to annotate. You can apply annotations to functions, classes, and other code elements. Let’s see some examples:

Here’s an example using the JUnit framework, where a test method is marked with the @Test annotation:

Kotlin
import org.junit.*

class MyTest {
    @Test
    fun testTrue() {
        Assert.assertTrue(true)
    }
}

In Kotlin, annotations can have parameters. Let’s take a look at the @Deprecated annotation as a more interesting example. It has a replaceWith parameter, which allows you to provide a replacement pattern to facilitate a smooth transition to a new version of the API. The following code demonstrates the usage of annotation arguments, including a deprecation message and a replacement pattern:

Kotlin
@Deprecated("Use removeAt(index) instead.", ReplaceWith("removeAt(index)"))
fun remove(index: Int) { ... }

In this case, when someone uses the remove function in their code, the IDE will not only show a suggestion to use removeAt instead, but it will also offer a quick fix to automatically replace the remove function with removeAt. This makes it easier to update your code and follow the recommended practices.

Annotations in Kotlin can have arguments of specific types, such as primitive types, strings, enums, class references, other annotation classes, and arrays of these types. The syntax for specifying annotation arguments is slightly different from Java:

To specify a class as an annotation argument, use the ::class syntax:

Kotlin
@MyAnnotation(MyClass::class)

Suppose you have a custom annotation called @MyAnnotation and want to pass a class called MyClass as an argument. Writing @MyAnnotation(MyClass::class) passes a reference to the class itself as the argument, indicating which class the annotation is associated with.

To specify another annotation as an argument, don’t use the @ character before the annotation name:

Kotlin
@Deprecated("Use removeAt(index) instead.", replaceWith = ReplaceWith("removeAt(index)"))
fun remove(index: Int) { ... }

Here, @Deprecated takes the ReplaceWith annotation as its replaceWith argument, providing a replacement pattern. Note that ReplaceWith is written without the “@” symbol: omitting it indicates that the argument is itself another annotation.

To specify an array as an argument, use the arrayOf function:

For example, say you have an annotation called @RequestMapping with a parameter called path, and you want to pass the array of strings ["/foo", "/bar"] as its value:

Kotlin
@RequestMapping(path = arrayOf("/foo", "/bar"))

However, if the annotation class is declared in Java, you don’t need to use the arrayOf function. In Java, the parameter named value in the annotation is automatically converted to a vararg parameter if necessary. This means you can directly provide the values without using the arrayOf function.
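
As an aside, since Kotlin 1.2 you can also use the array-literal syntax inside annotations, which reads more naturally than arrayOf (the @RequestMapping annotation here comes from Spring, as above):

```kotlin
@RequestMapping(path = ["/foo", "/bar"])
```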

To use a property as an annotation argument, you need to mark it with a const modifier:

In Kotlin, annotation arguments need to be known at compile time, which means you cannot refer to arbitrary properties as arguments. However, you can use the const modifier to mark a property as a compile-time constant, allowing you to use it as an annotation argument.

To use a property as an annotation argument, follow these steps:

  1. Declare the property using the const modifier at the top level of a file or inside an object.
  2. Initialize the property with a value of a primitive type or a String.

Here’s an example using JUnit’s @Test annotation that specifies a timeout for a test:

Kotlin
const val TEST_TIMEOUT = 100L

@Test(timeout = TEST_TIMEOUT)
fun testMethod() {
    // Test code goes here
}

In this example, TEST_TIMEOUT is declared as a const property with a value of 100L. The timeout parameter of the @Test annotation is then set to the value of TEST_TIMEOUT. This allows you to specify the timeout value as a constant that can be reused and easily changed if needed.

Remember that properties marked with const need to be declared at the top level of a file or inside an object, and they must be initialized with values of primitive types or String. Using regular properties without the const modifier will result in a compilation error with the message “Only ‘const val’ can be used in constant expressions.”

Best Practices for Using Annotations

Using annotations the right way keeps your Kotlin code clean and powerful. Here are some tips:

1. Use Target and Retention Wisely

  • @Target specifies where your annotation can be applied: classes, functions, properties, etc.
  • @Retention controls how long the annotation is kept: source code only, compiled classes, or runtime.
Kotlin
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
annotation class LogExecutionTime

Use RUNTIME if your annotation will be read by reflection.

2. Keep Annotations Lightweight

Avoid stuffing annotations with too many parameters. Use defaults whenever possible to reduce clutter.

Kotlin
annotation class Audit(val user: String = "system")
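
With a default in place, most call sites stay terse; Jobs below is an illustrative object:

```kotlin
annotation class Audit(val user: String = "system")  // retention defaults to RUNTIME

object Jobs {
    @Audit                 // uses the default: user = "system"
    fun archive() { }

    @Audit(user = "amol")  // explicit override
    fun purge() { }
}
```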

3. Document Custom Annotations

Treat annotations like part of your public API. Always include comments and KDoc.

Kotlin
/**
 * Indicates that the method execution time should be logged.
 */
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
annotation class LogExecutionTime

Example: Logging Execution Time

Let’s say you want to log how long your functions take to execute. You can create a custom annotation and use reflection to handle it.

Kotlin
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
annotation class LogExecutionTime

class Worker {
    @LogExecutionTime
    fun doWork() {
        Thread.sleep(1000)
        println("Work done!")
    }
}

Now add logic to read the annotation:

Kotlin
// Requires the kotlin-reflect artifact on the classpath.
import kotlin.reflect.KFunction

fun runWithLogging(obj: Any) {
    obj::class.members.forEach { member ->
        // Only time callables that carry the @LogExecutionTime annotation.
        if (member.annotations.any { it is LogExecutionTime }) {
            val start = System.currentTimeMillis()
            (member as? KFunction<*>)?.call(obj)
            val end = System.currentTimeMillis()
            println("Execution time: ${end - start} ms")
        }
    }
}

fun main() {
    val worker = Worker()
    runWithLogging(worker)
}

This will automatically time any function marked with @LogExecutionTime. Clean and effective.

Advanced Use Case: Dependency Injection with Dagger

In Dagger or Hilt, annotations are essential. You don’t write much logic yourself; instead, annotations do the work.

Kotlin
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {

    @Provides
    fun provideApiService(): ApiService {
        return Retrofit.Builder()
            .baseUrl("https://api.softaai.com")
            .build()
            .create(ApiService::class.java)
    }
}

Here, @Module, @Provides, and @InstallIn drive the dependency injection system. Once you learn how to apply annotations in Kotlin, libraries like Dagger become far less intimidating.

Conclusion

Annotations in Kotlin are more than decoration — they’re metadata with a purpose. Whether you’re customizing behavior, interfacing with Java, or using advanced frameworks, knowing how to apply annotations in Kotlin gives you a real edge.

Quick Recap:

  • Use annotations to add metadata.
  • Apply built-in annotations to boost interoperability and performance.
  • Create your own annotations for clean, reusable logic.
  • Follow best practices: target, retention, defaults, and documentation.

With the right approach, annotations make your Kotlin code smarter, cleaner, and easier to scale.

Kotlin Use-Site Target Annotations

Kotlin Use-Site Target Annotations Explained with Real-World Examples

Kotlin has quickly become a developer favorite for its expressiveness and safety features. One powerful but often overlooked feature is Kotlin Use-Site Target Annotations. While annotations in Kotlin are commonly used, specifying a use-site target adds precision to how these annotations behave. Whether you’re building Android apps or backend services, understanding this feature can help...
