Amol Pawar

Linked List

How to Create a Linked List in Kotlin: Easy Step-by-Step Tutorial

If you’re diving into the world of Kotlin and exploring data structures, you’ve probably come across linked lists. While arrays and lists are common in Kotlin development, understanding linked lists can open up new levels of flexibility in your coding journey. In this guide, we’ll unravel what linked lists are, why you might need them, and most importantly, how to create and use linked lists in Kotlin.

What is a Linked List?

A Linked List is a data structure consisting of a sequence of elements, called nodes. 

Each node has two components:

  • Data: The value we want to store.
  • Next: A reference to the next node in the sequence.

Unlike arrays, Linked Lists are dynamic in size, offering efficient insertions and deletions at any position in the list.

In a linked list, each node stores a value and points to the next node in the chain. The last node in the sequence points to “null,” indicating the end of the list.

Linked lists have several advantages over arrays or ArrayLists in Kotlin:

  • Quick insertions and removals at the front of the list — constant time, with no shifting of elements.
  • Constant-time insertion or removal at any position, provided you already hold a reference to the neighboring node (unlike arrays, no elements need to be shifted).

Types of Linked Lists

  1. Singly Linked List — Each node points to the next node in the sequence (we’ll focus on this one).
  2. Doubly Linked List — Each node has a reference to both the next and the previous node.
  3. Circular Linked List — The last node points back to the first node, forming a loop.
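Since we’ll focus on the singly linked variant, here is a minimal sketch of what a doubly linked node could look like for comparison. The DoublyNode name is just for illustration; it isn’t used in the rest of this guide.

```kotlin
// A doubly linked node keeps a reference in both directions.
// We use a regular class (not a data class) so the generated toString()
// can't recurse forever through the mutual prev/next references.
class DoublyNode<T>(
    var value: T,
    var prev: DoublyNode<T>? = null,
    var next: DoublyNode<T>? = null
)

fun main() {
    val first = DoublyNode(1)
    val second = DoublyNode(2)

    // Link the two nodes in both directions.
    first.next = second
    second.prev = first

    println(second.prev?.value) // prints 1: we can now walk backwards as well as forwards
}
```

This backward link is exactly what makes operations like removing the last node cheap in a doubly linked list.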

Building a Singly Linked List in Kotlin

Kotlin doesn’t offer a built-in linked list class the way Java does with java.util.LinkedList. But no worries! We’re going to build our own singly linked list from scratch, step by step. We’ll start by defining the Node class and then build a LinkedList class to manage the nodes.

Defining the Node Class

Each node needs to store data and a reference to the next node. Here’s our Node class:

Kotlin
// We define a node of the linked list as a data class, where it holds a value and a reference to the next node.

data class Node<T>(var value: T, var next: Node<T>? = null) {
    override fun toString(): String {
        return if (next != null) "$value -> ${next.toString()}" else "$value"
    }
}

fun main() {
    val node1 = Node(value = 1)
    val node2 = Node(value = 2)
    val node3 = Node(value = 3)

    node1.next = node2
    node2.next = node3   // node3's next stays null, marking the end of the list, so toString() prints only its value
    println(node1)
}

//OUTPUT

1 -> 2 -> 3

Here, we defined a generic Node class for a linked list in Kotlin. Each Node holds a value of any type (T) and a reference to the next Node, which can be null. The toString() method provides a custom string representation for the node, recursively displaying the value of the node followed by the values of subsequent nodes, separated by ->. If the node is the last in the list, it simply shows its value.

Have you observed how we constructed the list above? We essentially created a chain of nodes by linking their ‘next’ references. However, building lists in this manner becomes impractical as the list grows larger. To address this, we can use a LinkedList, which simplifies managing the nodes and makes the list easier to work with. Let’s explore how we can implement this in Kotlin.

Creating the LinkedList Class

Let’s create our LinkedList class and add core functionalities like adding nodes and displaying the list.

Basically, a linked list has a ‘head’ (the first node) and a ‘tail’ (the last node). In a singly linked list we usually only deal with the head, although the tail becomes useful when adding elements at the end. The tail matters even more in doubly linked or circular linked lists, where it supports bidirectional traversal or maintains the circular reference. Here, we will track both nodes in our singly linked list.

Kotlin
class LinkedList<T> {
    private var head: Node<T>? = null
    private var tail: Node<T>? = null
    private var size = 0

    // Method to check if the list is empty
    fun isEmpty(): Boolean = size == 0
    
     // to print nodes in linkedlist
    override fun toString(): String {
        if (isEmpty()) {
            return "Empty list"
        } else {
            return head.toString()
        }
    }
}

Here, a linked list has a ‘head’ (the first node) and a ‘tail’ (the last node). We’ll also store the list’s size in a ‘size’ property.

Now, to use this linked list, we need to add some values to it; otherwise, we’d only have an empty list. There are three major operations for adding values to a linked list, and we’ll explore each one in more detail in the next blog. First, let’s see how to add a new value (or node) to the linked list and then print the result.

Kotlin
// Method to add a new node at the beginning
fun addFirst(data: T) {
    val newNode = Node(data)
    newNode.next = head
    head = newNode
    size++
}

Here, we first create a new node with the passed value. Then, the new node’s next points to the head, and finally, we update the head to point to the newly created node. The same process is repeated whenever we add a new value.

Note: Whenever a new value is added, the list size increases. Therefore, we need to increment the size accordingly.

Now, let’s look at the complete code.

Kotlin
class LinkedList<T> {
    private var head: Node<T>? = null
    private var tail: Node<T>? = null
    private var size = 0

    // Method to check if the list is empty
    fun isEmpty(): Boolean = size == 0

    // to print nodes in linkedlist
    override fun toString(): String {
        if (isEmpty()) {
            return "Empty list"
        } else {
            return head.toString()
        }
    }

    // Method to add a new node at the beginning
    fun addFirst(data: T) {
        val newNode = Node(data)
        newNode.next = head
        head = newNode
        size++
    }
}
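Notice that addFirst never touches tail. As a hedged sketch of why we keep it around, here is one possible addLast that appends in constant time by tracking the tail; for it to stay consistent, addFirst also has to set the tail when the list starts out empty. This is our own illustration (the full set of insertion operations is covered in the next blog), with the Node class repeated so the sketch is self-contained:

```kotlin
data class Node<T>(var value: T, var next: Node<T>? = null) {
    override fun toString(): String =
        if (next != null) "$value -> ${next.toString()}" else "$value"
}

class LinkedList<T> {
    private var head: Node<T>? = null
    private var tail: Node<T>? = null
    private var size = 0

    fun isEmpty(): Boolean = size == 0

    override fun toString(): String =
        if (isEmpty()) "Empty list" else head.toString()

    fun addFirst(data: T) {
        val newNode = Node(data, next = head)
        head = newNode
        if (tail == null) tail = newNode // the first element is also the last
        size++
    }

    // Append at the end in constant time by using the stored tail,
    // instead of walking the whole chain from the head.
    fun addLast(data: T) {
        val newNode = Node(data)
        if (isEmpty()) {
            head = newNode
        } else {
            tail?.next = newNode
        }
        tail = newNode
        size++
    }
}

fun main() {
    val list = LinkedList<String>()
    list.addFirst("Kotlin")
    list.addLast("World")
    list.addFirst("Hello")
    println(list) // prints: Hello -> Kotlin -> World
}
```

Without the tail reference, addLast would have to traverse every node to find the end, turning a constant-time append into a linear one.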

Using the Linked List in Kotlin

Let’s put our linked list to the test! Here’s how we can use the LinkedList class:

Kotlin
fun main() {
    val myList = LinkedList<String>()

    println("Is the list empty? ${myList.isEmpty()}")

    myList.addFirst("Kotlin")
    myList.addFirst("Hello")
   
    println(myList) // Output: Hello -> Kotlin

    println("Is the list empty? ${myList.isEmpty()}")
}

Output

Kotlin
Is the list empty? true
Hello -> Kotlin
Is the list empty? false

Conclusion

We’ve explored the basics of building and inserting into a linked list, along with the foundational concepts and structure that make linked lists an essential part of data management. Understanding these operations provides a solid base for working with linked lists in various scenarios.

Linked lists might seem daunting, but with a bit of practice, you’ll be using them like a pro.

Understanding Android App Standby Buckets: Resource Limits, Job Execution, and Best Practices

With Android’s continuous evolution, power management has become increasingly fine-tuned. Starting from Android 9 (API level 28), Android introduced App Standby Buckets, a dynamic classification system that governs how apps can access system resources based on their usage patterns.

These buckets are essential for developers who rely on background jobs, alarms, or network access to power their app’s core functionality.

In this post, we’ll explore what these buckets are, how they limit your app’s capabilities, and how you can optimize your app to function efficiently within these boundaries.

What Are App Standby Buckets?

App Standby Buckets categorize apps based on how frequently they are used. Android uses a combination of machine learning and user behavior analysis to dynamically assign apps to a bucket.

The buckets help Android prioritize system and battery resources without degrading the user experience.

Here are the five main buckets:

  1. Active — App is currently in use or was used very recently.
  2. Working Set — App used often, possibly running in the background.
  3. Frequent — App used regularly but not daily.
  4. Rare — App used infrequently.
  5. Restricted — App is misbehaving or user has manually restricted it.

Resource Limits by Standby Bucket

Each bucket determines how much access an app has to jobs, alarms, and network activity. Below is a breakdown of the execution time windows for each resource type:

Active

  • Regular Jobs: Up to 20 minutes in a rolling 60-minute period
  • Expedited Jobs: Up to 30 minutes in a rolling 24-hour period
  • Alarms: No execution limits
  • Network Access: Unrestricted

Note: The job limits above apply to the Active bucket starting with Android 16. Prior to Android 16, apps in the Active bucket had no job execution limit.

Working Set

  • Regular Jobs: Up to 10 minutes in a rolling 4-hour period
  • Expedited Jobs: Up to 15 minutes in a rolling 24-hour period
  • Alarms: Limited to 10 per hour
  • Network Access: Unrestricted

Frequent

  • Regular Jobs: Up to 10 minutes in a rolling 12-hour period
  • Expedited Jobs: Up to 10 minutes in a rolling 24-hour period
  • Alarms: Limited to 2 per hour
  • Network Access: Unrestricted

Rare

  • Regular Jobs: Up to 10 minutes in a rolling 24-hour period
  • Expedited Jobs: Up to 10 minutes in a rolling 24-hour period
  • Alarms: Limited to 1 per hour
  • Network Access: Disabled

Restricted

  • Regular Jobs: Once per day for up to 10 minutes
  • Expedited Jobs: Up to 5 minutes in a rolling 24-hour period
  • Alarms: One per day (exact or inexact)
  • Network Access: Disabled

Regular vs. Expedited Jobs

Android distinguishes between two types of scheduled jobs:

  • Regular Jobs: Standard background tasks scheduled via JobScheduler or WorkManager.
  • Expedited Jobs: Urgent, high-priority jobs using setExpedited(true) or expedited workers in WorkManager.

Expedited jobs have separate quotas from regular jobs. Once the expedited quota is exhausted, expedited jobs may still run, but only under the regular job limits.

Best Practice: Use expedited jobs only for urgent tasks. For everything else, rely on regular job scheduling.

Alarm Limits

Starting with Android 12, alarm limits have tightened:

  • Apps in Working Set or below are subject to hourly or daily caps.
  • Apps in the Restricted bucket can schedule only one alarm per day (exact or inexact).

If your app depends on alarms, consider alternatives like JobScheduler or WorkManager, especially for non-critical tasks.

Network Access Limitations

Apps in the Rare and Restricted buckets cannot access the network unless they are running in the foreground.

This has big implications for features like:

  • Background syncing
  • Data uploads
  • Real-time updates

Make sure to test network-reliant tasks across all bucket conditions.

Android 13+ Update: FCM Quota Change

As of Android 13, the number of high-priority Firebase Cloud Messaging (FCM) messages an app can receive is no longer tied to the standby bucket.

This change benefits apps that rely on push messages (like messaging or ride-sharing apps), ensuring more consistent delivery.

Developer Tips for Bucket Optimization

  1. Track App Usage
     Use UsageStatsManager to monitor your app’s current bucket status.
  2. Leverage WorkManager
     It automatically handles job fallback between expedited and regular quotas.
  3. Respect Background Limits
     Overusing background resources can land your app in the Restricted bucket.
  4. Batch and Defer Tasks
     Reduce battery drain and stay in higher buckets longer by batching non-critical jobs.
  5. Test Across Buckets
     Simulate different standby buckets with this ADB command:
Shell
adb shell am set-standby-bucket <package_name> <bucket_name>

Conclusion

App Standby Buckets are a key piece of Android’s power management strategy. By tailoring your background behavior to each bucket’s constraints, you not only improve performance and battery life but also ensure a smoother user experience.

Understanding how these limits work — and respecting them — helps you build apps that are efficient, resilient, and Play Store compliant.

FAQs

Q: Can I manually move my app to a different bucket?
 A: No. The system dynamically assigns apps based on usage. You can only simulate bucket placement during testing.

Q: Do background restrictions help battery life?
 A: Yes, but they can also restrict your app’s background capabilities. Design wisely.

Q: How do I test alarms or jobs under low buckets like Rare or Restricted?
 A: Use ADB to simulate conditions, monitor behavior, and fine-tune fallback strategies.

Encryption Best Practices

Encryption Best Practices & Secure Key Management in Kotlin

Encryption is powerful, but if you don’t manage keys securely or follow best practices, your data might still be at risk. Here’s what you should know when working with encryption in Kotlin, especially for Android apps.

Why Is Key Management So Important?

Think of encryption keys like the keys to your house. If someone steals your key, they can unlock everything — even if your door is super strong.

In encryption:

  • The secret key unlocks your encrypted data.
  • If keys are exposed or hard-coded in your app, attackers can decrypt your info easily.

So, secure key management means generating, storing, and using encryption keys safely.

Best Practices for Managing Encryption Keys in Kotlin/Android

1. Use Android’s Keystore System

Android provides a secure container called the Keystore, where you can safely generate and store cryptographic keys. On devices with secure hardware, keys stored here are hardware-backed and cannot be extracted from the device, making it extremely hard for attackers to steal them.

Here’s a quick way to generate and use a key in Android Keystore:

Kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyStore
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

fun generateKeyInKeystore(alias: String): SecretKey {
    val keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES,
        "AndroidKeyStore"
    )

    val keyGenParameterSpec = KeyGenParameterSpec.Builder(
        alias,
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
    )
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setRandomizedEncryptionRequired(true)
        .build()

    keyGenerator.init(keyGenParameterSpec)

    return keyGenerator.generateKey()
}

fun getKeyFromKeystore(alias: String): SecretKey {
    val keyStore = KeyStore.getInstance("AndroidKeyStore")
    keyStore.load(null)

    return keyStore.getKey(alias, null) as SecretKey
}

Explanation:

  • generateKeyInKeystore creates a new AES key stored securely inside the Android Keystore.
  • You specify the key’s purpose and encryption parameters.
  • getKeyFromKeystore fetches the stored key when you need it for encryption or decryption.

2. Never Hardcode Keys in Your App

Avoid placing keys as constants in your source code. Hardcoded keys are easily extracted through reverse engineering. Always generate keys at runtime or securely fetch them from the Keystore.

3. Use a Secure Initialization Vector (IV)

IVs should be random and unique for every encryption. Never reuse IVs with the same key. The IV is usually sent alongside the encrypted data, often as a prefix, because it’s needed for decryption.

Here’s how to generate a secure IV in Kotlin:

Kotlin
import java.security.SecureRandom

fun generateRandomIV(): ByteArray {
    val iv = ByteArray(16)
    SecureRandom().nextBytes(iv)
    return iv
}

4. Authenticate Your Data

Encryption protects confidentiality, but attackers can still tamper with ciphertext if you don’t check data integrity. Use authenticated encryption modes like AES-GCM that combine encryption and integrity checks.

Here’s how you might switch to AES-GCM in Kotlin:

Kotlin
import javax.crypto.Cipher
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

fun encryptGCM(message: String, secretKey: SecretKey, iv: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    val spec = GCMParameterSpec(128, iv)  // 128-bit authentication tag
    cipher.init(Cipher.ENCRYPT_MODE, secretKey, spec)
    return cipher.doFinal(message.toByteArray(Charsets.UTF_8))
}

AES-GCM provides both confidentiality and integrity, making your encryption more robust.
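As mentioned earlier, the IV is usually shipped alongside the ciphertext, often as a prefix. The sketch below rounds out the GCM example with the matching decryption and that prefix pattern. The helper names encryptWithIvPrefix and decryptWithIvPrefix are ours, not a standard API, and the key here is a plain JVM key so the snippet runs anywhere; in a real Android app you would fetch the key from the Keystore as shown above.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

const val GCM_IV_BYTES = 12   // 12-byte IVs are the recommended size for GCM
const val GCM_TAG_BITS = 128  // 128-bit authentication tag

// Encrypt and return IV + ciphertext in one array, so the IV travels with the data.
fun encryptWithIvPrefix(message: String, key: SecretKey): ByteArray {
    val iv = ByteArray(GCM_IV_BYTES).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
    return iv + cipher.doFinal(message.toByteArray(Charsets.UTF_8))
}

// Split off the IV prefix, then decrypt; GCM also verifies the authentication
// tag and throws if the ciphertext was tampered with.
fun decryptWithIvPrefix(blob: ByteArray, key: SecretKey): String {
    val iv = blob.copyOfRange(0, GCM_IV_BYTES)
    val ciphertext = blob.copyOfRange(GCM_IV_BYTES, blob.size)
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
    return cipher.doFinal(ciphertext).toString(Charsets.UTF_8)
}

fun main() {
    // Plain JVM key for demonstration only; on Android, use the Keystore instead.
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val blob = encryptWithIvPrefix("sensitive data", key)
    println(decryptWithIvPrefix(blob, key)) // round-trips back to the plaintext
}
```

Because the IV is not secret (only the key is), prefixing it to the ciphertext is safe and keeps everything the receiver needs in a single blob.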

5. Protect Your Keys and IVs During Storage and Transmission

  • Store keys only in secure hardware-backed keystores or encrypted storage.
  • When transmitting IVs or ciphertext, use secure channels like HTTPS or encrypted messaging.
  • Always validate the source before decrypting any data.

Conclusion

Encryption alone isn’t enough. Proper key management and following these best practices help you build secure apps that genuinely protect users’ data.

If you develop Android apps or Kotlin projects handling sensitive data, leveraging Android’s Keystore and authenticated encryption modes is a must.

Car Service in AOSP

Car Service in AOSP Explained Simply: For Beginners in Android Automotive

If you’re getting started with Android Automotive OS (AAOS), you’ll quickly run into something called Car Service in AOSP. It’s one of those essential components that makes Android work inside a car — not on your phone, but actually on the car’s infotainment system.

In this guide, we’ll break down Car Service in AOSP step-by-step, explain how it works, what it does, and walk through code examples so you can understand and start building with confidence.

What Is Car Service in AOSP?

In the world of Android Open Source Project (AOSP), Car Service is a system service designed specifically for the automotive version of Android. It’s what bridges the gap between car hardware (like sensors, HVAC, speed, fuel level) and Android apps or services that need that data.

Think of it as the middleman that manages and exposes car hardware features to Android applications safely and consistently.

Why Is Car Service Important in Android Automotive?

  • Access to Vehicle Data: It lets apps access data like speed, gear, HVAC status, fuel level, etc.
  • Security: Only authorized components can access sensitive vehicle data.
  • Abstraction: It hides the car’s hardware complexity behind clean Android APIs.
  • Interoperability: Developers can build apps that work across different car manufacturers.

Core Components of Car Service in AOSP

Let’s simplify the architecture. Here’s how the system flows:

Car Hardware Abstraction Layer (HAL)
                ↓
        Car Service (System)
                ↓
     Car APIs / CarApp Library
                ↓
    Third-party or System Apps

1. Car HAL (Hardware Abstraction Layer)

This is the lowest layer. It connects directly to the car’s ECU and other hardware via vendor-specific code.

2. Car Service

This lives in packages/services/Car in AOSP. It reads from the HAL and exposes data to the rest of Android using APIs.

3. Car APIs

Available via android.car namespace. Developers use these in their apps to access vehicle data in a clean, safe way.

Where Is Car Service Code in AOSP?

You’ll find the Car Service source code here:

Java
/packages/services/Car/

Key files:

  • CarService.java – The main system service.
  • CarPropertyService.java – Handles access to vehicle properties.
  • VehicleHal.java – Bridges to the HAL.
  • VehicleStubHal.java – A fake HAL used for emulators and testing.

Let’s Look at Code: How Car Service Works

Here’s a super simplified example from CarService.java:

Java
public class CarService extends Service {
    @Override
    public void onCreate() {
        super.onCreate();
        Log.i(TAG, "CarService started");

        // Initialize vehicle property manager
        mCarPropertyService = new CarPropertyService();
        mCarPropertyService.init();

        // Register the service to ServiceManager
        ServiceManager.addService("car_service", this);
    }
}

Here,

  • The system starts CarService during boot.
  • It initializes CarPropertyService which talks to the HAL.
  • It registers itself to ServiceManager so apps can bind to it.

This is what makes vehicle data accessible through Android’s Car APIs.

Permissions and Security

Accessing vehicle data isn’t open to everyone — and that’s a good thing.

You’ll need permissions like:

XML
<uses-permission android:name="android.car.permission.CAR_SPEED"/>
<uses-permission android:name="android.car.permission.CAR_ENGINE"/>

And apps must be system-signed or granted via whitelist in car_service.xml.

Accessing Vehicle Data from an App

Here’s how a developer might access the vehicle speed:

Java
Car car = Car.createCar(context);
CarPropertyManager propertyManager = (CarPropertyManager) car.getCarManager(Car.PROPERTY_SERVICE);

float speed = (Float) propertyManager.getProperty(
    VehiclePropertyIds.PERF_VEHICLE_SPEED, 0);

Here,

  • Connect to the Car service.
  • Get the CarPropertyManager.
  • Read the vehicle speed using a property ID.

Testing Car Service Without a Real Car

Don’t have an actual car ECU to test with? Use the VehicleStubHal!

In VehicleStubHal.java, you can simulate data:

Java
@Override
public VehiclePropValue getProperty(...) {
    if (propertyId == VehiclePropertyIds.PERF_VEHICLE_SPEED) {
        return new VehiclePropValue(..., 42.0f); // Fake speed
    }
    // ... other properties handled below
}

Perfect for development and debugging on emulators.

Customizing Car Service for Your OEM

If you’re building a custom ROM for a car, you’ll likely need to:

  1. Implement your own Car HAL in hardware/interfaces/automotive/vehicle.
  2. Customize Car Service components in /packages/services/Car/.
  3. Define new vehicle properties if needed.

Make sure you align with VHAL (Vehicle HAL) AIDL interface definitions.

Summary: Key Takeaways

  • Car Service in AOSP is the system layer that gives Android Automotive access to vehicle hardware.
  • It abstracts complex car hardware into simple APIs.
  • Apps use the android.car APIs to safely read and respond to vehicle state.
  • Testing is possible with stub HALs — no real car needed!
  • It’s secure, modular, and extensible.

Internal DSLs

Building Internal DSLs in Your Favorite Programming Language

Software isn’t just about solving problems — it’s about making solutions expressive and readable. One powerful way to do this is by building Internal DSLs (Domain-Specific Languages).

If you’ve ever wished your code could read more like English (or your team’s own domain language), then an internal DSL might be the tool you’re looking for.

What Are Internal DSLs?

An Internal DSL is a mini-language built inside a general-purpose programming language. Instead of creating a brand-new compiler or parser, you use your existing language’s syntax, features, and runtime to express domain logic in a more natural way.

Think of it as customizing your language for your project’s needs without leaving the language ecosystem.

Internal DSLs vs. External DSLs

It’s easy to confuse the two:

  • External DSLs → standalone languages (e.g., SQL, HTML) with their own parser.
  • Internal DSLs → embedded within another language (e.g., a Ruby RSpec test reads like plain English).

Internal DSLs are faster to implement because you don’t reinvent the wheel — you leverage the host language.

Why Build an Internal DSL?

  • Improved readability — Code speaks the language of the domain.
  • Fewer mistakes — Constraints baked into the syntax reduce errors.
  • Faster onboarding — New developers learn the DSL instead of the whole codebase first.
  • Reusability — The same DSL can be applied across multiple projects.

Internal DSLs

Unlike external DSLs, which have their own independent syntax, an internal DSL is embedded within a general-purpose programming language and uses the host language’s syntax and constructs. In other words, it’s not a separate language but a particular way of using the main language, giving you the benefits of a DSL without an independent syntax. Code written in an internal DSL looks and feels like regular code in the host language, but it is structured and designed to address a particular problem domain more intuitively and efficiently.

To compare the two approaches, let’s see how the same task can be accomplished with an external and an internal DSL. Imagine that you have two database tables, Customer and Country, and each Customer entry has a reference to the country the customer lives in. The task is to query the database and find the country where the majority of customers live. The external DSL you’re going to use is SQL; the internal one is provided by the Exposed framework (https://github.com/JetBrains/Exposed), which is a Kotlin framework for database access.

Here’s a comparison of the two approaches:

External DSL (SQL):

SQL
SELECT Country.name, COUNT(Customer.id)
FROM Country
JOIN Customer
ON Country.id = Customer.country_id
GROUP BY Country.name
ORDER BY COUNT(Customer.id) DESC
LIMIT 1

Internal DSL (Kotlin with Exposed):

Kotlin
(Country join Customer)
    .slice(Country.name, Count(Customer.id))
    .selectAll()
    .groupBy(Country.name)
    .orderBy(Count(Customer.id), isAsc = false)
    .limit(1)

As you can see, the internal DSL version in Kotlin closely resembles regular Kotlin code, and the operations like slice, selectAll, groupBy, and orderBy are just regular Kotlin methods provided by the Exposed framework. The query is expressed using these methods, making it easier to read and write than the SQL version. Additionally, the results of the query are directly delivered as native Kotlin objects, eliminating the need to manually convert data from SQL query result sets to Kotlin objects.

The internal DSL approach provides the advantages of DSLs, such as improved readability and expressiveness for the specific domain, while leveraging the familiarity and power of the host language. This combination makes the code more maintainable, less error-prone and allows domain experts to work more effectively without the need to learn a completely separate syntax.

A Kotlin HTML Builder DSL

Kotlin’s syntax is well-suited for DSLs:

Kotlin
html {
    head {
        title { +"My Page" }
    }
    body {
        h1 { +"Hello World" }
        p { +"This is my internal DSL example." }
    }
}

Why this works well:

  • Extension functions make the code feel like part of the language.
  • Lambda with receiver allows nesting that mirrors HTML structure.
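The snippet above is usage only. For completeness, here is one minimal way such a builder could be implemented; this is a simplified sketch of the pattern (not the kotlinx.html library), built from lambdas with receiver plus a unaryPlus operator for text content:

```kotlin
// A minimal tag tree: each tag has a name, optional text, and child tags.
open class Tag(private val name: String) {
    private val children = mutableListOf<Tag>()
    private val text = StringBuilder()

    // Unary plus turns a raw string into text content: +"My Page"
    operator fun String.unaryPlus() { text.append(this) }

    // Build a child tag by running its lambda with the child as receiver.
    protected fun <T : Tag> child(tag: T, build: T.() -> Unit): T {
        tag.build()
        children.add(tag)
        return tag
    }

    override fun toString(): String =
        "<$name>$text${children.joinToString("")}</$name>"
}

class Title : Tag("title")
class H1 : Tag("h1")
class P : Tag("p")

class Head : Tag("head") {
    fun title(build: Title.() -> Unit) = child(Title(), build)
}

class Body : Tag("body") {
    fun h1(build: H1.() -> Unit) = child(H1(), build)
    fun p(build: P.() -> Unit) = child(P(), build)
}

class Html : Tag("html") {
    fun head(build: Head.() -> Unit) = child(Head(), build)
    fun body(build: Body.() -> Unit) = child(Body(), build)
}

fun html(build: Html.() -> Unit): Html = Html().apply(build)

fun main() {
    val page = html {
        head { title { +"My Page" } }
        body {
            h1 { +"Hello World" }
            p { +"This is my internal DSL example." }
        }
    }
    println(page)
}
```

Each nesting level is just a lambda whose receiver is the tag being built, which is why the call sites mirror the HTML structure so closely.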

Common Pitfalls

  • Overcomplication — If your DSL is harder to learn than the original language, it fails its purpose.
  • Poor documentation — DSLs still need guides and examples.
  • Leaky abstractions — Avoid exposing too much of the host language if it breaks immersion.

Picking the Right Language

Some languages are more DSL-friendly due to flexible syntax and operator overloading:

  • Ruby — Very popular for DSLs (e.g., Rails migrations).
  • Scala — Strong type system + functional features.
  • Kotlin — Extension functions + lambda with receiver make DSLs clean.
  • Python — Simple syntax and dynamic typing make it easy to prototype.
  • JavaScript — Template literals and functional style work well.

That said — you can build an internal DSL in almost any language.

Real-World Examples of Internal DSLs

  • RSpec (Ruby) — Readable test cases.
  • Gradle Kotlin DSL — Build scripts that feel native in Kotlin.
  • Jinja Filters in Python — Embedded in templates but powered by Python.

Conclusion

Building Internal DSLs is about making code read like conversation — clear, concise, and tuned to the domain. Whether you’re writing tests, building configs, or defining workflows, a well-crafted DSL can cut cognitive load and boost productivity.

Start small, test ideas, and grow your DSL as your team uses it. Before long, you might find that the DSL becomes one of the most beloved parts of your codebase.

What Is VHDX in Virtualization?

What Is VHDX in Virtualization? A Complete Overview for IT Pros

If you’re working in IT or managing virtual environments, you’ve probably come across the term VHDX. But what exactly is it, why does it matter, and how can you use it effectively? In this post, we’ll break down VHDX in virtualization in simple terms, walk through its benefits, explain how it works internally, and even cover some practical examples with code snippets. By the end, you’ll know exactly how VHDX fits into your virtualization strategy.

What Is VHDX?

VHDX stands for Virtual Hard Disk v2. It’s a disk image file format introduced with Windows Server 2012 as an upgrade to the older VHD (Virtual Hard Disk) format.

Think of VHDX as a virtual hard drive — just like a physical disk in your computer, but stored as a single file on your host system. Inside it, you can install operating systems, store data, and run applications — all within a virtual machine (VM).

The key difference: VHDX supports modern workloads. It’s more resilient, can handle much larger disk sizes, and protects against corruption better than its predecessor.

Why VHDX Matters in Virtualization

Virtualization thrives on efficiency and flexibility. Here’s why VHDX in virtualization is so valuable:

  • Bigger capacity: Supports up to 64 TB compared to VHD’s 2 TB limit.
  • Improved performance: Handles large block sizes better, ideal for workloads like databases.
  • Resilience: Includes logging to protect against data corruption during crashes or power failures.
  • Alignment with modern storage: Optimized for large sector disks (4 KB).
  • Dynamic resizing: VHDX files can grow or shrink without downtime.

For IT pros, this means fewer limits, more stability, and better handling of enterprise-scale virtual machines.

VHD vs. VHDX: Quick Comparison

  • Maximum size: VHD tops out at 2 TB; VHDX supports up to 64 TB.
  • Corruption protection: VHDX logs metadata updates so the disk survives crashes and power failures; VHD has no such safeguard.
  • Sector alignment: VHDX is optimized for modern 4 KB sector disks.
  • Resizing: VHDX files can grow or shrink without downtime.

If you’re setting up new virtual environments, VHDX should be your default choice unless you need compatibility with legacy systems.

How VHDX Works in Virtualization

When you create a new VM in Hyper-V or another virtualization platform, you’re usually asked to attach a virtual hard disk. This disk is stored as a .vhdx file. The guest OS inside your VM sees it as a standard hard drive.

Under the hood, the host system manages all the reads/writes to the .vhdx file and ensures that data is written correctly—even during unexpected events like power loss.

Here’s the important part: the VM only sees logical space. The host decides whether to reserve that space upfront or let the file grow over time. This brings us to the two disk types you can choose: fixed-size and dynamic.

Fixed vs. Dynamic VHDX

Fixed-size VHDX

  • Allocates the full size immediately. If you create a 200 GB fixed disk, the host’s storage instantly reserves 200 GB.
  • Performance is predictable and slightly faster.
  • Best for mission-critical workloads like databases.

Dynamic VHDX

  • Starts small and grows as data is added. A 200 GB dynamic disk might only consume 10 GB on the host if that’s all the VM is using.
  • More space-efficient and flexible.
  • Best for general-purpose or test/dev environments.

This flexibility is why logical allocation is common. If every VM grabbed its full allocation upfront, host storage would be consumed very quickly, even if most VMs weren’t using their full disks.

Why Not Just Use Physical Allocation Always?

It’s a fair question: why let VMs think they have more space than the host physically has?

  • Efficiency: Most VMs never use their full allocated size. Logical allocation avoids wasting host storage.
  • Scalability: In enterprise environments with hundreds of VMs, fixed pre-allocation would demand massive upfront storage. Dynamic allocation enables faster scaling.
  • Performance Trade-off: Fixed disks give the best speed and predictability, but dynamic disks offer flexibility. IT admins choose based on workload needs.
  • Abstraction: Virtualization is about creating an illusion. The VM doesn’t need to know the true storage situation — it just needs a disk to run.

Note: Over-provisioning (promising more logical space than you physically have) can be risky. If all VMs try to use their full allocation at once, the host can run out of storage. That’s why monitoring is essential.

A Quick Analogy

Think of your physical disk as an airplane with 200 seats.

  • If you create fixed disks, it’s like selling only 200 tickets — safe, predictable.
  • If you use dynamic disks, it’s like selling 220 tickets, betting not everyone will show up.
  • Usually, it works. But if everyone does show up (all VMs demand full storage), you’ll have a problem unless you planned ahead.

Creating a VHDX File

You can create and manage VHDX files using PowerShell, which makes automation easy for IT admins.

PowerShell
# Create a new dynamic VHDX file with a maximum size of 50GB
New-VHD -Path "C:\VMs\Disk1.vhdx" -SizeBytes 50GB -Dynamic

# Attach the VHDX file to a virtual machine
Add-VMHardDiskDrive -VMName "TestVM" -Path "C:\VMs\Disk1.vhdx"

Explanation:

  • New-VHD: Creates a new virtual hard disk.
  • -Path: Location where the .vhdx file will be stored.
  • -SizeBytes: Maximum size (50GB in this case).
  • -Dynamic: The file grows as data is added, instead of consuming the full 50GB immediately.
  • Add-VMHardDiskDrive: Attaches the new disk to the VM named TestVM.

This simple script saves time compared to clicking through the Hyper-V Manager GUI.

Best Practices for Using VHDX in Virtualization

  1. Pick the right type: Fixed for performance-critical workloads, dynamic for flexibility.
  2. Back up regularly: VHDX is resilient, but backups are still mandatory.
  3. Watch over-provisioning: Dynamic disks can silently grow and consume host storage.
  4. Convert old VHDs: Use PowerShell’s Convert-VHD to move from legacy VHD to VHDX.
  5. Use checkpoints wisely: Helpful for testing, but they can bloat disk usage.

Conclusion

VHDX in virtualization is the modern standard for virtual hard disks. It offers scalability, resilience, and efficiency that older formats can’t match. For IT professionals managing enterprise workloads, switching to VHDX ensures that your virtual machines are future-ready.

Key takeaway: VHDX doesn’t magically create storage. It allocates logical space to give flexibility and efficiency, while physical space is consumed only as needed. This balance is what makes virtualization powerful — but it also requires careful monitoring and planning.

Salts vs. Pepper

Salts vs. Pepper: The Unsung Heroes of Secure Password Hashing

When we talk about password security, the conversation usually goes straight to hashing algorithms — things like SHA-256, bcrypt, or Argon2. But there are two lesser-known players that can make or break your defenses: salts and pepper.

Think of them as seasoning for your password hashes — not for flavor, but for security.

Why Password Hashing Alone Isn’t Enough

Hashing is like putting your password through a one-way blender — you can’t (easily) get the original password back. But if attackers get your hashed password database, they can still use rainbow tables or brute-force attacks to figure out the original passwords.

That’s where salts and pepper come in. They make every hash unique and harder to crack — even if someone has your database.

Salts vs. Pepper: What’s the Difference?

Salts

  • A random value added to each password before hashing.
  • Stored alongside the hash in the database.
  • Makes precomputed hash tables (rainbow tables) impractical for attackers.
  • Every user gets a unique salt.

Pepper

  • A secret value added to the password before hashing.
  • Not stored in the database — kept separately (e.g., in environment variables or secure key vaults).
  • Even if the attacker steals your database, they can’t crack hashes without the pepper.

In short:

  • Salt is public but unique per password
  • Pepper is secret and the same for all passwords (or sometimes per user, but still hidden).

Kotlin Example: Salting and Peppering Passwords

Let’s see this in Kotlin. We’ll use the MessageDigest API for hashing (for simplicity), though in real production you should use stronger libraries like BCrypt or Argon2.

Kotlin
import java.security.MessageDigest
import java.security.SecureRandom
import java.util.Base64

object PasswordHasher {

    // Generate a random salt for each password
    fun generateSalt(length: Int = 16): String {
        val random = SecureRandom()
        val salt = ByteArray(length)
        random.nextBytes(salt)
        return Base64.getEncoder().encodeToString(salt)
    }

    // Your secret pepper - should be stored securely (e.g., env variable)
    private const val PEPPER = "SuperSecretPepperValue123!"

    // Hash with salt + pepper
    fun hashPassword(password: String, salt: String): String {
        val saltedPepperedPassword = password + salt + PEPPER
        val digest = MessageDigest.getInstance("SHA-256")
        val hashBytes = digest.digest(saltedPepperedPassword.toByteArray(Charsets.UTF_8))
        return Base64.getEncoder().encodeToString(hashBytes)
    }

    // Verify password
    fun verifyPassword(inputPassword: String, storedSalt: String, storedHash: String): Boolean {
        val inputHash = hashPassword(inputPassword, storedSalt)
        return inputHash == storedHash
    }
}

fun main() {
    val password = "MySecurePassword!"

    // 1. Generate salt
    val salt = PasswordHasher.generateSalt()

    // 2. Hash password with salt + pepper
    val hashedPassword = PasswordHasher.hashPassword(password, salt)

    println("Salt: $salt")
    println("Hash: $hashedPassword")

    // 3. Verify
    val isMatch = PasswordHasher.verifyPassword("MySecurePassword!", salt, hashedPassword)
    println("Password match: $isMatch")
}

Salt Generation

  • We create a random salt using SecureRandom.
  • This ensures no two hashes are the same, even if passwords are identical.
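A quick standalone Kotlin check makes this concrete (separate from the PasswordHasher above, but using the same SHA-256 and SecureRandom primitives): two identical passwords with different salts produce different hashes.

```kotlin
import java.security.MessageDigest
import java.security.SecureRandom
import java.util.Base64

// Plain SHA-256, Base64-encoded (same digest used in the PasswordHasher above).
fun sha256(s: String): String =
    Base64.getEncoder().encodeToString(
        MessageDigest.getInstance("SHA-256").digest(s.toByteArray(Charsets.UTF_8))
    )

// 16 random bytes from SecureRandom, Base64-encoded.
fun randomSalt(): String {
    val bytes = ByteArray(16)
    SecureRandom().nextBytes(bytes)
    return Base64.getEncoder().encodeToString(bytes)
}

fun main() {
    val password = "hunter2"
    // Two users choose the same password, but each gets a unique salt...
    val hashA = sha256(password + randomSalt())
    val hashB = sha256(password + randomSalt())
    // ...so their stored hashes differ, and a precomputed table for "hunter2" is useless.
    println(hashA == hashB) // prints "false" (with overwhelming probability)
}
```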

Pepper Usage

  • The pepper is stored outside the database, often in environment variables or secure vaults.
  • It’s the “secret ingredient” that attackers won’t see if they only have the database.

Hashing

  • We combine the password + salt + pepper before hashing with SHA-256.
  • In production, replace SHA-256 with bcrypt or Argon2 for better resistance against brute force.

Verification

  • When a user logs in, we retrieve the stored salt, hash the provided password with the same pepper, and compare the results.

Best Practices for Salts and Pepper

  • Always use a unique salt for each password. Never reuse salts.
  • Store salts with the hash in the database.
  • Keep pepper secret — in an environment variable, key management system, or hardware security module.
  • Use a slow, memory-hard hashing algorithm like bcrypt, scrypt, or Argon2.
  • Rotate peppers periodically for maximum security.
  • Never hard-code pepper in your source code for production.
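The last two points can be sketched in a few lines of Kotlin. Note that `APP_PEPPER` is a hypothetical environment-variable name chosen for illustration; use whatever your deployment or secret manager defines.

```kotlin
// Sketch: keep the pepper out of source code by reading it from the environment.
// "APP_PEPPER" is an assumed variable name for illustration only.
fun loadPepper(): String =
    System.getenv("APP_PEPPER")
        ?: error("APP_PEPPER is not set; refusing to hash passwords without a pepper")

fun main() {
    // Fail fast at startup if the secret is missing, rather than silently
    // hashing with no pepper at all.
    val result = runCatching { loadPepper() }
    println(if (result.isSuccess) "Pepper loaded" else "Pepper missing: aborting startup")
}
```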

Why Salts and Pepper Matter

Attackers thrive on shortcuts. Salts remove the shortcut of using rainbow tables. Pepper blocks attackers even if they have your entire password database. Together, they make your password security significantly harder to break.

Conclusion

When it comes to security, the little details — like salts and pepper — make a big difference. Hashing without them is like locking your front door but leaving the window wide open. So next time you store a password, make sure it’s seasoned with both.

Fibonacci in Kotlin

Fibonacci in Kotlin: Recursion, Loops & Dynamic Programming (Complete Guide)

The Fibonacci sequence isn’t just math trivia — it’s a timeless example used in coding interviews, algorithm practice, and real-world software optimization. In this guide, we’ll explore how to implement Fibonacci in Kotlin using:

  • Recursion — Easy to grasp, but not the fastest.
  • Loops — Simple and efficient.
  • Dynamic Programming — Optimized for large numbers.

What is the Fibonacci Sequence?

The Fibonacci sequence...
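As a small preview of the loop-based approach, here is a minimal iterative sketch:

```kotlin
// Iterative Fibonacci: O(n) time, O(1) space.
fun fib(n: Int): Long {
    var a = 0L // fib(0)
    var b = 1L // fib(1)
    repeat(n) {
        val next = a + b
        a = b
        b = next
    }
    return a
}

fun main() {
    println((0..10).map(::fib)) // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
}
```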

NTFS File System in Windows

What is NTFS File System in Windows?

If you’ve ever saved a file on your Windows computer, you’ve already worked with a file system — even if you didn’t realize it. One of the most widely used formats today is the NTFS File System in Windows. But what exactly is it, and why does it matter? 

Let’s break it down.

What is a File System?

A file system is like a digital organizer. It tells your operating system (Windows, in this case) how to store, manage, and retrieve files on your hard drive or SSD. Without a file system, your computer would have no idea where files are located or how to access them.

Windows supports multiple file systems like FAT32, exFAT, and NTFS. Among them, NTFS (New Technology File System) is the default for modern Windows systems.

A Quick Look at NTFS

Introduced by Microsoft in 1993 with Windows NT, NTFS File System in Windows was designed to replace the older FAT systems. Over time, it became the go-to choice because it offered better security, reliability, and support for larger storage devices.

Here’s what makes NTFS stand out:

  • Supports large files — You can store files much bigger than 4 GB (a limitation in FAT32).
  • File permissions and security — NTFS allows you to set who can read, write, or execute a file.
  • Journaling — Keeps a log of changes, which helps recover data in case of sudden power loss or crashes.
  • Compression and encryption — Saves disk space and adds a layer of protection.

Why Does Windows Use NTFS by Default?

Windows uses NTFS because it’s built for modern computing. Whether you’re storing thousands of small text files or massive video projects, NTFS can handle it. Its security features also make it ideal for professional environments where protecting sensitive data is a must.

How to Check if Your Drive is Using NTFS

Want to see if your computer is using NTFS? It’s simple:

  1. Open File Explorer.
  2. Right-click on the drive (like C:) and choose Properties.
  3. Under the General tab, look for File System.

If it says NTFS, you’re good to go.
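If you’d rather check programmatically, the JVM’s standard `java.nio.file` API can report the file-system type of every mounted volume. A small Kotlin sketch (on a Windows NTFS drive, the type prints as NTFS):

```kotlin
import java.nio.file.FileSystems

fun main() {
    // List every mounted file store and its file-system type
    // (e.g. NTFS on Windows, ext4 on Linux, apfs on macOS).
    for (store in FileSystems.getDefault().fileStores) {
        println("${store.name()} -> ${store.type()}")
    }
}
```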

NTFS in Action: Formatting a Drive with NTFS

Sometimes you may need to format a USB drive or external hard drive with NTFS. Here’s how to do it using the Command Prompt:

Bash
format E: /FS:NTFS /Q /V:MyDrive

Here,

  • E: → The drive letter you want to format.
  • /FS:NTFS → Tells Windows to use the NTFS File System.
  • /Q → Quick format (saves time).
  • /V:MyDrive → Assigns a label (name) to the drive.

Warning: Formatting erases all data on the drive. Make sure you back up files before running this command.

NTFS vs FAT32 vs exFAT

It’s worth knowing how NTFS compares to other systems:

  • FAT32 — Works everywhere (Windows, macOS, Linux, game consoles), but can’t handle files larger than 4 GB.
  • exFAT — Great for external drives and large files, but doesn’t offer NTFS-level security.
  • NTFS — Perfect for Windows internal drives thanks to its security, journaling, and efficiency.

When Should You Use NTFS?

Use NTFS if:

  • You’re running Windows as your main operating system.
  • You need to secure files with permissions or encryption.
  • You’re working with large drives (over 32 GB).
  • You need stability for professional or personal data storage.

Conclusion

The NTFS File System in Windows is more than just a storage format — it’s the backbone that keeps your data safe, organized, and accessible. Whether you’re casually browsing the web, editing videos, or managing sensitive business files, NTFS ensures your system runs smoothly and securely.

If you’ve ever wondered why your Windows PC “just works” when it comes to storing files, now you know — NTFS is doing the heavy lifting behind the scenes.

GPT vs MBR

GPT vs MBR: Which Partition Style Should You Choose in 2025?

If you’ve ever installed Windows or set up a new hard drive or SSD, you’ve probably come across the terms MBR (Master Boot Record) and GPT (GUID Partition Table). At first glance, they might seem like just another technical detail to skip over, but choosing the right partition style can affect your system’s performance, reliability, and even whether your computer boots at all.

In this guide, I’ll break down what MBR and GPT really mean, their pros and cons, how to check which one your system is using, and when you should pick one over the other. By the end, you’ll have the clarity to make the right choice for your setup.

What is MBR (Master Boot Record)?

MBR is the older partitioning scheme, introduced way back in 1983 with IBM PCs. It stores both the bootloader and the partition table in the very first sector of the disk.

Key characteristics:

  • Supports disk sizes up to 2 TB only.
  • Allows up to 4 primary partitions (or 3 primary + 1 extended with multiple logical drives).
  • Works with Legacy BIOS systems.

Limitations:

  • Not suitable for modern large-capacity drives.
  • If the MBR sector gets corrupted, your entire disk might become unreadable.
  • Fewer partitions and less flexibility compared to GPT.

What is GPT (GUID Partition Table)?

GPT is the modern replacement for MBR, introduced as part of the UEFI (Unified Extensible Firmware Interface) standard. Instead of storing critical information in a single sector, GPT keeps multiple copies across the disk, making it more reliable.

Key characteristics:

  • Supports disks larger than 2 TB (theoretical limit is 9.4 zettabytes).
  • Can hold up to 128 partitions on Windows (even more on Linux).
  • Works with UEFI firmware systems.
  • Uses CRC32 checksums to detect and correct data corruption.

Advantages:

  • Perfect for modern SSDs and HDDs.
  • More resilient to corruption thanks to redundant partition tables.
  • Required if you want to boot Windows in UEFI mode.

How to Check if Your Disk is MBR or GPT

On Windows

Method 1: Disk Management

  1. Press Win + X → open Disk Management.
  2. Right-click your disk (e.g., “Disk 0”) → Properties → Volumes.
  3. Look for Partition Style → it will say either Master Boot Record (MBR) or GUID Partition Table (GPT).

Method 2: Command Prompt

Open Command Prompt as Administrator.

Type:

Bash
diskpart
list disk

If there’s a star (*) under the GPT column, your disk is GPT. If blank, it’s MBR.

On Linux

Method 1: Using lsblk

Bash
lsblk -o NAME,PTTYPE
  • dos = MBR
  • gpt = GPT

Method 2: Using parted

Bash
sudo parted -l
  • Shows Partition Table: msdos (MBR) or Partition Table: gpt.

How to Convert Between MBR and GPT

Windows

  • MBR → GPT without data loss: Use Microsoft’s built-in MBR2GPT tool (Windows 10 version 1703 or later).
Bash
mbr2gpt /validate /disk:0 /allowFullOS
mbr2gpt /convert /disk:0 /allowFullOS
  • After conversion, switch your BIOS mode from Legacy to UEFI.
  • GPT → MBR: Requires deleting all partitions. Backup your data, then reinitialize the disk as MBR in Disk Management.

Linux

  • Use gdisk to convert MBR to GPT. Opening an MBR disk in gdisk converts the partition table in memory; nothing is changed until you write with the w command:
Bash
sudo gdisk /dev/sda

GPT ↔ MBR conversion is possible, but keep in mind:

  • You cannot safely convert if the disk has more than 4 partitions or partitions larger than 2 TB.
  • Always back up before making changes.
  • Tools like GParted, AOMEI Partition Assistant, or EaseUS Partition Master also offer safe conversion options.

When Should You Use GPT or MBR?

Here’s a simple thumb rule:

Choose GPT if:

  • Your disk is larger than 2 TB.
  • You need more than 4 partitions.
  • Your PC uses UEFI firmware.
  • You want better data reliability and corruption protection.
  • You’re installing Windows 10/11, Linux, or macOS on modern hardware.

Choose MBR if:

  • You’re using an older computer that only supports Legacy BIOS.
  • Your drive is 2 TB or smaller.
  • You need compatibility with older operating systems (Windows 7 32-bit, XP, older Linux distributions).
  • You’re setting up an external drive for use with very old devices.

How to Check if Your System Uses UEFI or BIOS

Since GPT works with UEFI and MBR works with BIOS, it’s useful to confirm which firmware your computer uses.

On Windows:

Press Win + R → type msinfo32 → press Enter.

Look for BIOS Mode:

  • UEFI → your system supports GPT.
  • Legacy → your system supports MBR only.

On Linux:

Check the presence of EFI variables:

Bash
ls /sys/firmware/efi
  • If the folder exists, your system is booted in UEFI mode.
  • If not, it’s using Legacy BIOS.

Conclusion

The debate between GPT vs MBR isn’t really a debate anymore — it’s about compatibility. GPT is clearly the better option for modern systems, offering support for large drives, more partitions, and better resilience. That said, MBR still has a place in older hardware or for situations where compatibility matters more than flexibility.

My recommendation:

  • If you’re installing a new OS on modern hardware → go GPT.
  • If you’re maintaining or repairing an old system → stick with MBR.

Making the right choice ensures smoother performance, fewer headaches, and future-proof storage for your data.
