Amol Pawar

What Is Kotlin Native? A Complete Beginner’s Guide to Cross-Platform Power

If you’ve ever wanted to write code once and run it across multiple platforms without dragging along a heavy runtime, Kotlin Native is worth a look.

It lets you take Kotlin beyond the JVM and compile it into real native binaries. That changes how your apps start, run, and scale across platforms.

Let’s break it down in a simple, practical way.

What Is Kotlin Native?

Kotlin Native is a technology that compiles Kotlin code directly into native machine code.

So instead of this:

  • Kotlin → bytecode → JVM → runs on device

You get this:

  • Kotlin → native binary → runs on OS

Normally, Kotlin runs on the JVM, where your code is compiled into bytecode and executed by the Java Virtual Machine.

Kotlin Native skips the JVM entirely. It uses LLVM to compile Kotlin into a standalone executable (like a .exe on Windows or a framework on iOS) that runs directly on the operating system.

This means there’s no JVM runtime or bytecode layer involved — just your compiled program running natively on the platform.

Why Kotlin Native Matters

Most cross-platform tools rely on some kind of bridge or runtime. Kotlin Native skips that.

Here’s what that gives you in practice:

Cross-platform without rewriting logic

You can reuse core logic across iOS, desktop, and other platforms, while still building native UIs.

No runtime dependency

Your app is compiled ahead of time. It runs as a standalone executable.

Faster startup

Since there’s no runtime to spin up, apps launch quickly.

Same Kotlin language

You’re still writing Kotlin. No need to switch mental models.

How Kotlin Native Works

Kotlin Native uses ahead-of-time (AOT) compilation, and its compiler works differently from the JVM toolchain in a few key ways:

  1. Backend: It uses LLVM, the same powerful technology used by languages like Swift and C++.
  2. Interoperability: It “talks” natively to C, Objective-C, and Swift.
  3. No JVM Garbage Collector: it doesn’t reuse the JVM’s GC. Instead, it ships its own memory manager — a tracing garbage collector designed to be efficient across native targets.

In simple terms:

  1. You write Kotlin code
  2. The Kotlin Native compiler turns it into machine code
  3. You get a platform-specific binary
  4. That binary runs directly on the OS

No extra runtime involved.

A Simple Kotlin Example

Kotlin
fun main() {
    println("Hello, Kotlin Native!")
}
  • main() is the entry point
  • println() prints to the console
  • When compiled with Kotlin Native, this becomes a native executable

Nothing special in the syntax. That’s the point.
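If you have the standalone compiler installed, you can see the AOT step for yourself. A quick sketch — the commands come from the Kotlin/Native distribution, and the output extension varies by platform:

```shell
# Compile ahead of time into a native binary (produces hello.kexe on Linux/macOS)
kotlinc-native hello.kt -o hello

# Run it directly — no JVM involved
./hello.kexe
```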

Let’s See Some Code!

Working with Functions in Kotlin Native

To understand how Kotlin Native feels, let’s look at a simple example. Imagine we want a shared piece of code that says “Hello” but identifies which platform it’s running on.

The “Expect” and “Actual” Pattern

Kotlin uses a unique system to handle platform-specific features.

Kotlin
// This goes in the "Common" folder
expect fun getPlatformName(): String

fun greet(): String {
    return "Hello from Kotlin Native on ${getPlatformName()}!"
}
  • expect: This tells the compiler, “I promise I will provide the actual implementation for this function on every specific platform (iOS, Windows, etc.).”

Now, here is how the iOS-specific implementation might look:

Kotlin
// This goes in the "iosMain" folder
import platform.UIKit.UIDevice

actual fun getPlatformName(): String {
    return UIDevice.currentDevice.systemName() + " " + UIDevice.currentDevice.systemVersion
}
  • actual: This is the real implementation.
  • Notice the import platform.UIKit.UIDevice? This is Kotlin Native talking directly to Apple’s UIKit! You are using Kotlin to access iOS system APIs.

Memory Management in Kotlin Native

This used to be one of the trickier parts of Kotlin Native. Older versions had strict rules around sharing objects between threads, requiring object freezing and making concurrency restrictive.

That’s changed. Kotlin Native now has a more relaxed memory model, allowing you to share data across threads more naturally without fighting the system.

It also uses a garbage collector, so you don’t need manual memory management like in C++.

It’s still not identical to JVM behavior, but it’s much easier to work with than before. While concurrency is more flexible now, you’re responsible for ensuring thread safety when working with shared mutable state.
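The classic shared-counter pattern shows what that responsibility looks like. Here’s a runnable sketch — it uses JVM atomics so you can try it anywhere, but the discipline is the same on Kotlin/Native (where you’d reach for the kotlin.concurrent atomics or a library like kotlinx-atomicfu):

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import kotlin.concurrent.thread

// Increment a shared counter from several threads.
// The atomic guarantees no lost updates across threads.
fun countWithThreads(threads: Int = 4, perThread: Int = 10_000): Int {
    val counter = AtomicInteger(0)
    val workers = List(threads) {
        thread { repeat(perThread) { counter.incrementAndGet() } }
    }
    workers.forEach { it.join() }
    return counter.get()
}

fun main() {
    // 4 threads * 10_000 increments, none lost
    println(countWithThreads())  // 40000
}
```

With a plain `var counter = 0` instead of the atomic, the final count would usually come up short — that’s the kind of bug the relaxed memory model leaves you free to write.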

For advanced scenarios like C interop, kotlinx.cinterop provides access to raw pointers—but this is rarely needed in typical development.

Where Kotlin Native Fits

You’ll rarely use Kotlin Native by itself. It’s usually part of Kotlin Multiplatform.

Typical use cases:

  • Sharing business logic between Android and iOS
  • Writing cross-platform libraries
  • Building lightweight backend tools
  • Working on embedded or edge devices 

The main idea is simple: write logic once, reuse it where it makes sense.

Advantages of Kotlin Native

  • Fast startup
  • No runtime dependency
  • Smaller footprint
  • Can interop with C libraries
  • Good fit for performance-sensitive code

Limitations to Know

It’s not a silver bullet.

  • Ecosystem is smaller than JVM
  • Some libraries won’t work out of the box
  • Debugging can feel rough at times
  • Build times can be slow

Most of these are improving, but they’re still worth keeping in mind.

When Should You Use Kotlin Native?

Use it when:

  • You’re building a cross-platform app with shared logic
  • You need native performance
  • You’re targeting iOS alongside Android

Skip it if:

  • Your app is Android-only
  • You rely heavily on JVM-specific libraries

Getting Started

A simple way to begin:

  1. Set up a Kotlin Multiplatform project
  2. Add native targets (iOS, macOS, etc.)
  3. Write shared Kotlin code
  4. Compile using Kotlin Native

If you’re using IntelliJ IDEA, most of this is already streamlined.

Tips for Beginners

  • Start with small examples
  • Focus on shared logic first
  • Avoid pulling in too many dependencies early
  • Test on real targets when possible

Conclusion

Kotlin Native isn’t trying to replace everything — it fills a powerful gap. It lets you share logic across platforms while keeping native performance and experience.

If you already know Kotlin, expanding to iOS and desktop is more accessible than ever.

It’s time to think beyond Android development — and start thinking in terms of native, cross-platform efficiency.

Kotlin Native CInterop Explained: Seamlessly Call C Code Like a Pro

If you’ve ever needed to use an existing C library in a Kotlin project, you’ve probably run into the gap between modern Kotlin and low-level native code. That’s where Kotlin Native CInterop comes in.

This guide breaks it down in a practical way. No fluff, just what you need to understand how it works and how to use it.

What is Kotlin Native CInterop?

Kotlin Native CInterop is a tool that lets you call C (and Objective-C) code directly from Kotlin/Native.

In simple terms:

  • You reuse existing C libraries
  • Kotlin generates bindings for you
  • You call native functions like regular Kotlin functions

It handles a lot of the heavy lifting, including type mapping and function access.

When Should You Use Kotlin Native CInterop?

Use it when:

  • You need system-level APIs written in C
  • You want to reuse a stable C library
  • You’re building with Kotlin Multiplatform
  • Performance matters and native code already exists

Common examples include crypto libraries, OS-level APIs, or legacy integrations.

How It Works (Quick Overview)

The workflow is straightforward:

  1. Provide a C header file
  2. Create a .def file
  3. Kotlin generates bindings
  4. Call the functions in Kotlin

Once set up, it feels surprisingly natural.
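For the curious, the same workflow can also be driven by hand with the tools that ship in the Kotlin/Native distribution. A hedged sketch — flag spellings may differ slightly between releases, and the Gradle route below is what you’d normally use:

```shell
# Generate Kotlin bindings (a .klib) from the definition file
cinterop -def math.def -compiler-options -I. -o math

# Compile a Kotlin file against the generated library
kotlinc-native main.kt -library math -o app
```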

Step-by-Step Setup

1. Create a C Library

C
// math_utils.h
#ifndef MATH_UTILS_H
#define MATH_UTILS_H

int add(int a, int b);
int multiply(int a, int b);

#endif
// math_utils.c
#include "math_utils.h"

int add(int a, int b) {
    return a + b;
}

int multiply(int a, int b) {
    return a * b;
}

2. Create a Definition File

Def
headers = math_utils.h
compilerOpts = -I.

Save it as: math.def

This tells Kotlin Native CInterop what to process.

3. Configure Gradle

Kotlin
kotlin {
    linuxX64("native") {
        compilations.getByName("main") {
            cinterops {
                val math by creating {
                    defFile(project.file("src/nativeInterop/cinterop/math.def"))
                }
            }
        }
    }
}

4. Build the Project

./gradlew build

This generates the bindings from your C headers.

Calling C Code from Kotlin

Once everything is set up, using the functions is simple:

Kotlin
import math.*

fun main() {
    val result = add(3, 5)
    val product = multiply(4, 6)

    println("Sum: $result")
    println("Product: $product")
}

There’s no special syntax here. Kotlin Native CInterop exposes the C functions directly.

Type Mapping Basics

Kotlin maps common C types automatically — int becomes Int, float becomes Float, double becomes Double, and pointer types come through as CPointer variants (a const char*, for example, arrives as a nullable CPointer<ByteVar>):

Example: C String

C
const char* greet() {
    return "Hello from C!";
}

Kotlin:

Kotlin
val message = greet()?.toKString()
println(message)

You’ll need toKString() because C strings are pointers.

Memory Management (Important)

C uses manual memory management. Kotlin does not.

Kotlin Native provides memScoped to keep things safe:

Kotlin
import kotlinx.cinterop.*

fun example() = memScoped {
    val ptr = alloc<IntVar>()
    ptr.value = 10
    println(ptr.value)
}

Think of memScoped as a safe boundary for temporary native allocations.

Working with Pointers

Pointers show up often in C APIs.

C
void increment(int* value) {
    (*value)++;
}

Kotlin:

Kotlin
memScoped {
    val num = alloc<IntVar>()
    num.value = 5

    increment(num.ptr)
    println(num.value) // 6
}

Key idea:

  • alloc<T>() creates memory
  • .ptr gives you a pointer

Structs

C structs map cleanly to Kotlin.

C
typedef struct {
    int x;
    int y;
} Point;

Kotlin:

Kotlin
memScoped {
    val point = alloc<Point>()
    point.x = 10
    point.y = 20
    
    println("x: ${point.x}, y: ${point.y}")
}

You interact with them like regular objects.

Common Pitfalls

Memory leaks
Use memScoped or manage allocations carefully.

Wrong include paths
Double-check your .def file.

Macros not working
Some macros don’t translate well. You may need manual wrappers.

Platform differences
Behavior can vary between Linux, macOS, etc.

Best Practices

  • Keep headers small and focused
  • Wrap complex C logic in simpler functions
  • Test interop boundaries thoroughly
  • Use Kotlin for business logic, C for low-level work
  • Document what your bindings expose

Conclusion

Kotlin Native CInterop makes calling C code surprisingly straightforward once you understand the basics.

You don’t need to be a C expert. You just need to:

  • Understand how headers work
  • Set up the .def file correctly
  • Know how Kotlin maps types and memory

Kotlin Native CInterop lets you combine Kotlin’s developer experience with the power of native libraries.

You don’t have to rewrite working C code. You just plug it in and move on.

visibilityThreshold in Spring Animations: How It Works in Jetpack Compose

When you start working with animations in Jetpack Compose, the spring() API feels intuitive at first—until you notice something odd: animations don’t seem to fully stop. They get very, very close to the target value… but technically never reach it.

That’s where visibilityThreshold quietly does some of the most important work.

This article walks you through what it is, why it matters, and how to use it correctly across different data types like Dp, Float, and Offset. Along the way, we’ll build real composable examples you can run and experiment with.

Why Spring Animations Need a Stopping Condition

Spring animations simulate real physics. Instead of moving linearly from point A to point B, they behave like a spring:

  • They move toward the target
  • Overshoot (depending on damping)
  • Oscillate
  • Gradually settle

From a math perspective, they never fully stop — they just get closer over time.

In UI, that’s not useful. We need a clear stopping point. Compose handles this using a threshold.

What is visibilityThreshold?

visibilityThreshold defines the minimum difference between the current animated value and the target value at which the animation is considered finished.

In simple terms:

  • If the remaining distance is smaller than the threshold → animation stops
  • If not → animation continues
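Conceptually, the per-frame check looks something like this toy settling loop (a simplified sketch, not Compose’s actual solver):

```kotlin
import kotlin.math.abs

// Toy settling loop: ease toward the target each frame, and stop
// once the remaining distance drops below the visibility threshold.
fun framesToSettle(start: Float, target: Float, threshold: Float): Int {
    var value = start
    var frames = 0
    while (abs(target - value) >= threshold) {
        value += (target - value) * 0.2f  // close 20% of the gap per frame
        frames++
    }
    return frames
}

fun main() {
    // A tighter threshold keeps the animation "running" longer
    println(framesToSettle(0f, 100f, threshold = 0.01f))
    // A looser one lets it finish sooner
    println(framesToSettle(0f, 100f, threshold = 1f))
}
```

The animated value decays toward the target forever; the threshold is what turns “close enough” into “done.”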

How It Works for Different Types

Different types need different thresholds because “closeness” depends on the unit.

1. Dp (Density-independent pixels)

Kotlin
spring(
    dampingRatio = Spring.DampingRatioLowBouncy,
    stiffness = Spring.StiffnessMedium,
    visibilityThreshold = 0.5.dp
)

The animation stops when it’s within 0.5dp of the target.

Why this works:

  • Human eyes can’t distinguish sub-pixel differences at that scale
  • Prevents unnecessary micro-adjustments

2. Float (e.g., Alpha, Progress)

Kotlin
spring<Float>(
    dampingRatio = Spring.DampingRatioNoBouncy,
    stiffness = Spring.StiffnessMediumLow,
    visibilityThreshold = 0.01f
)

Stops when difference < 0.01

Why this matters:

  • Floats are continuous values
  • Without a threshold, fade animations might keep updating unnecessarily

3. Offset (Position)

Kotlin
spring(
    dampingRatio = Spring.DampingRatioMediumBouncy,
    stiffness = Spring.StiffnessHigh,
    visibilityThreshold = Offset(1f, 1f)
)

Stops when both X and Y are within 1 pixel

Why:

  • Movement smaller than 1px is visually irrelevant
  • Prevents jitter at the end of motion

Quick Refresher on Spring Parameters

A spring() spec is controlled by two parameters: dampingRatio, which sets how much the motion overshoots and bounces (Spring.DampingRatioNoBouncy through Spring.DampingRatioHighBouncy), and stiffness, which sets how quickly it moves (Spring.StiffnessVeryLow through Spring.StiffnessHigh). Lower damping means more bounce; higher stiffness means a faster settle.

Real Examples with Composables

Example 1: Animating Size (Dp)

Kotlin
@Composable
fun SpringDpExample() {
    var expanded by remember { mutableStateOf(false) }

    val size by animateDpAsState(
        targetValue = if (expanded) 200.dp else 100.dp,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioLowBouncy,
            stiffness = Spring.StiffnessMedium,
            visibilityThreshold = 0.5.dp
        )
    )

    Box(
        modifier = Modifier
            .size(size)
            .background(Color.Blue)
            .clickable { expanded = !expanded }
    )
}

@Preview(showBackground = true)
@Composable
fun SpringDpExamplePreview() {
    CenteredPreview {
        SpringDpExample()
    }
}

What to notice:

  • Smooth expansion and contraction
  • A slight bounce effect
  • Clean stop without jitter

Example 2: Animating Alpha (Float)

Kotlin
@Composable
fun SpringFloatExample() {
    var visible by remember { mutableStateOf(true) }

    val alpha by animateFloatAsState(
        targetValue = if (visible) 1f else 0f,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioNoBouncy,
            stiffness = Spring.StiffnessMediumLow,
            visibilityThreshold = 0.01f
        )
    )

    Box(
        modifier = Modifier
            .size(150.dp)
            .background(Color.Red.copy(alpha = alpha))
            .clickable { visible = !visible }
    )
}

@Preview(showBackground = true)
@Composable
fun SpringFloatExamplePreview() {
    CenteredPreview {
        SpringFloatExample()
    }
}

Observation:

  • Clean fade without bounce
  • No flickering at the end
  • Efficient termination

Example 3: Animating Position (Offset)

Kotlin
@Composable
fun SpringOffsetExample() {
    var moved by remember { mutableStateOf(false) }

    val offset by animateOffsetAsState(
        targetValue = if (moved) Offset(300f, 300f) else Offset.Zero,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioMediumBouncy,
            stiffness = Spring.StiffnessHigh,
            visibilityThreshold = Offset(1f, 1f)
        )
    )

    Box(
        modifier = Modifier
            .offset { IntOffset(offset.x.toInt(), offset.y.toInt()) }
            .size(80.dp)
            .background(Color.Green)
            .clickable { moved = !moved }
    )
}

@Preview(showBackground = true)
@Composable
fun SpringOffsetExamplePreview() {
    CenteredPreview {
        SpringOffsetExample()
    }
}

What to watch:

  • Movement feels natural
  • Slight bounce at destination
  • Stops cleanly without micro-shakes

Why visibilityThreshold is Important 

1. Performance Optimization

Without it:

  • The animation engine keeps recalculating tiny differences
  • Wastes CPU cycles
  • Impacts battery (especially on low-end devices)

With it:

  • Animation ends decisively
  • Reduces recompositions

2. Visual Stability

Tiny sub-pixel movements can cause:

  • Flickering
  • Jitter
  • Unstable UI feel

Threshold eliminates those artifacts.

3. Better UX

Users don’t perceive microscopic differences.

Ending animations early:

  • Feels faster
  • Feels smoother
  • Improves perceived performance

Choosing the Right Threshold

Here’s a practical guideline, drawn from the examples above:

  • Dp values: around 0.5.dp — below what the eye can distinguish
  • Float values (alpha, progress): around 0.01f
  • Offset: about 1 pixel per axis

Avoid:

  • Too small → wasted computation
  • Too large → noticeable snapping

Common Mistakes

Ignoring it completely
→ Animations run longer than needed

Using the same value everywhere
→ Different types need different precision

Setting it too high
→ Animation ends too early and looks abrupt

Conclusion

visibilityThreshold is easy to overlook, but it has a noticeable impact on how polished your animations feel.

A small tweak here can:

  • Reduce unnecessary work
  • Improve smoothness
  • Make animations feel more intentional

It’s one of those details that separates a working UI from a well-crafted one.

Master animate*AsState in Jetpack Compose: Effortless UI Animations Explained

Animations in the old View system involved a lot of ceremony. You’d set up an ObjectAnimator, attach a listener, call start(), remember to cancel on detach, and hope nothing leaked. For something as simple as fading a view, it felt disproportionate.

Compose takes a different approach. Instead of imperative animation commands, you describe what you want the UI to look like for a given state — and the animate*AsState family handles the transition automatically. No start/cancel lifecycle. No listeners unless you need them.

What Is animate*AsState?

animate*AsState is a group of composable functions that smoothly animate a value whenever its target changes. Feed it a target, and it produces a frame-by-frame animated value you can plug directly into your UI.

The * is a wildcard — there’s a variant for each value type you’re likely to animate: animateFloatAsState, animateDpAsState, animateColorAsState, animateOffsetAsState, animateIntAsState, animateSizeAsState, and a generic animateValueAsState for custom types.

They all follow the same pattern, so once you’ve used one, the others are trivial.

The Mental Model

The key shift from the View system: you don’t start animations. You change state.

State changes → animate*AsState detects the new target → interpolates toward it each frame

When isExpanded flips from false to true, you don’t tell anything to animate. You just update state, and the animated value catches up on its own. If the state changes again mid-flight, the animation redirects smoothly from wherever it currently is.

This is different from ValueAnimator, which needs explicit start/cancel calls and doesn’t know about your UI state at all.
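You can model that behavior with a toy value that chases whatever its current target is, frame by frame. Purely illustrative — ToyAnimatedValue is a made-up name here, not Compose API:

```kotlin
// Each frame, the value closes a fraction of the gap to the *current* target.
// Changing the target mid-flight simply redirects it from wherever it is.
class ToyAnimatedValue(var target: Float) {
    var value: Float = target
        private set

    fun frame() {
        value += (target - value) * 0.25f
    }
}

fun main() {
    val anim = ToyAnimatedValue(target = 0f)

    anim.target = 100f          // state change: no start() call anywhere
    repeat(5) { anim.frame() }  // value is now partway to 100

    anim.target = 0f            // retarget mid-flight
    anim.frame()                // value turns around smoothly from where it is
    println(anim.value)
}
```

Notice there is no start, cancel, or listener — the only inputs are target changes, which is exactly the mental model animate*AsState asks for.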

animateFloatAsState: Fading a Box

Start here — it’s the simplest case.

Kotlin
@Composable
fun FadeExample() {
    var isVisible by remember { mutableStateOf(true) }

    val animatedAlpha by animateFloatAsState(
        targetValue = if (isVisible) 1f else 0f,
        label = "alpha animation"
    )

    Column(
        horizontalAlignment = Alignment.CenterHorizontally,
        modifier = Modifier.padding(24.dp)
    ) {
        Box(
            modifier = Modifier
                .size(100.dp)
                // distinct name on purpose: inside graphicsLayer, a bare `alpha`
                // resolves to GraphicsLayerScope.alpha, so `this.alpha = alpha`
                // would just assign the property to itself
                .graphicsLayer { alpha = animatedAlpha }
                .background(Color(0xFF6650A4))
        )
        Spacer(modifier = Modifier.height(16.dp))
        Button(onClick = { isVisible = !isVisible }) {
            Text(if (isVisible) "Hide" else "Show")
        }
    }
}


@Preview(showBackground = true)
@Composable
fun FadeExamplePreview() {
    FadeExample()
}

isVisible is a plain Boolean. When it toggles, animateFloatAsState picks up the new target and eases the alpha value toward it over several frames. Each frame triggers a recomposition, which re-reads the updated value — that’s the full animation loop.

The label parameter is optional, but set it anyway. It appears in the Android Studio Animation Inspector and makes debugging significantly less painful.

animateColorAsState: Transitioning Colors

Color is one of the more visually rewarding things to animate because even a 300ms cross-fade reads as deliberate and polished.

Kotlin
@Composable
fun ColorToggleExample() {
    var isActive by remember { mutableStateOf(false) }

    val backgroundColor by animateColorAsState(
        targetValue = if (isActive) Color(0xFF6650A4) else Color(0xFFECECEC),
        animationSpec = tween(durationMillis = 500),
        label = "background color"
    )

    val textColor by animateColorAsState(
        targetValue = if (isActive) Color.White else Color.Black,
        animationSpec = tween(durationMillis = 500),
        label = "text color"
    )

    Box(
        contentAlignment = Alignment.Center,
        modifier = Modifier
            .fillMaxWidth()
            .height(120.dp)
            .background(backgroundColor, shape = RoundedCornerShape(16.dp))
            .clickable { isActive = !isActive }
    ) {
        Text(
            text = if (isActive) "Active" else "Inactive",
            color = textColor,
            fontSize = 20.sp,
            fontWeight = FontWeight.SemiBold
        )
    }
}


@Preview(showBackground = true)
@Composable
fun ColorToggleExamplePreview() {
    CenteredPreview {
        ColorToggleExample()
    }
}

Both the background and text colors animate in sync. You don’t coordinate them — they just share the same state source (isActive), so they naturally stay in step.

The animationSpec = tween(durationMillis = 500) is where you control how the animation plays out. More on that below.

animateDpAsState: Expandable Cards

animateDpAsState works on any Dp value — height, width, padding, corner radius. A common use case is an expandable card:

Kotlin
@Composable
fun ExpandableCardExample() {
    var isExpanded by remember { mutableStateOf(false) }

    val cardHeight by animateDpAsState(
        targetValue = if (isExpanded) 200.dp else 80.dp,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioMediumBouncy,
            stiffness = Spring.StiffnessLow
        ),
        label = "card height"
    )
    
    Card(
        modifier = Modifier
            .fillMaxWidth()
            .height(cardHeight)
            .clickable { isExpanded = !isExpanded },
        elevation = CardDefaults.cardElevation(defaultElevation = 4.dp)
    ) {
        Column(modifier = Modifier.padding(16.dp)) {
            Text(
                text = "Tap to ${if (isExpanded) "collapse" else "expand"}",
                fontWeight = FontWeight.Bold,
                fontSize = 16.sp
            )
            if (isExpanded) {
                Spacer(modifier = Modifier.height(12.dp))
                Text(
                    text = "The card height is driven by animateDpAsState. " +
                           "The spring spec adds a slight overshoot on open.",
                    fontSize = 14.sp,
                    color = Color.Gray
                )
            }
        }
    }
}

Using spring instead of tween here adds a small overshoot when the card opens — the physics-based easing makes it feel more physical than a plain duration curve.

Animation Specs

animationSpec controls the character of the animation. There are three you’ll reach for regularly.

tween — Fixed Duration

Kotlin
val size by animateDpAsState(
    targetValue = targetSize,
    animationSpec = tween(
        durationMillis = 400,
        delayMillis = 100,
        easing = FastOutSlowInEasing
    ),
    label = "size"
)

Pick tween when you need precise timing — UI tests, coordinated sequences, or matching a transition to an audio cue.

Common easing options:

  • FastOutSlowInEasing — decelerates into the final position (good for elements entering the screen)
  • LinearOutSlowInEasing — starts at constant speed, slows at the end (good for exits)
  • FastOutLinearInEasing — accelerates throughout (for emphasis)
  • EaseInOut — smooth on both ends, feels the most natural
  • LinearEasing — constant speed; fine for loaders, rarely right for UI transitions
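Under the hood, an easing is just a function from elapsed-time fraction (0..1) to progress fraction (0..1). A small sketch — LinearEasing really is the identity, while the smoothstep curve below only approximates the feel of EaseInOut (Compose defines its easings as cubic Béziers):

```kotlin
// An easing maps "how far through the duration we are" (0..1)
// to "how far through the value change we are" (0..1)
fun interface Easing {
    fun transform(fraction: Float): Float
}

val Linear = Easing { it }                            // constant speed
val SmoothStep = Easing { t -> t * t * (3 - 2 * t) }  // slow-in, slow-out

fun main() {
    // Every easing agrees at the endpoints...
    println(Linear.transform(0f))         // 0.0
    println(SmoothStep.transform(1f))     // 1.0
    // ...but differs in between: smoothstep starts gently
    println(Linear.transform(0.25f))      // 0.25
    println(SmoothStep.transform(0.25f))  // 0.15625
}
```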

spring — Physics-Based

Kotlin
val offsetX by animateFloatAsState(
    targetValue = targetPosition,
    animationSpec = spring(
        dampingRatio = Spring.DampingRatioMediumBouncy,  // How much it overshoots
        stiffness = Spring.StiffnessLow                  // How fast it moves
    ),
    label = "offset"
)

spring doesn’t have a fixed duration — it settles based on physics. The two parameters to tune:

Damping ratio (controls overshoot):

  • NoBouncy (1f) — glides in cleanly, no overshoot
  • LowBouncy (0.75f) — barely noticeable bounce
  • MediumBouncy (0.5f) — clear bounce, works well for cards and buttons
  • HighBouncy (0.2f) — exaggerated overshoot, use it deliberately

Stiffness (controls speed):

  • VeryLow — slow, floaty
  • Low — relaxed
  • Medium — balanced default
  • High — snappy
  • VeryHigh — nearly instant
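You can see what the damping ratio does with a small physics sketch — semi-implicit Euler on a damped spring, with illustrative numbers rather than Compose’s actual solver:

```kotlin
import kotlin.math.sqrt

// Simulate a damped spring pulling a value from 0 toward 1 and
// report how far past the target it travels (the overshoot).
fun maxOvershoot(dampingRatio: Double, stiffness: Double = 100.0): Double {
    val damping = 2 * dampingRatio * sqrt(stiffness)
    var x = 0.0
    var v = 0.0
    var peak = 0.0
    val dt = 0.001
    repeat(20_000) {
        val accel = -stiffness * (x - 1.0) - damping * v
        v += accel * dt   // semi-implicit Euler: update velocity first...
        x += v * dt       // ...then position with the new velocity
        if (x > peak) peak = x
    }
    return peak - 1.0
}

fun main() {
    // HighBouncy-style (0.2): travels well past the target before settling
    println(maxOvershoot(0.2))
    // NoBouncy (1.0): glides in without ever crossing the target
    println(maxOvershoot(1.0))
}
```

The same trade-off applies in Compose: lower damping buys personality at the cost of settle time, and stiffness scales how fast the whole motion plays out.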

keyframes — Custom Intermediate Values

Kotlin
val scale by animateFloatAsState(
    targetValue = if (isPressed) 0.9f else 1f,
    animationSpec = keyframes {
        durationMillis = 300
        1.1f at 100   // overshoots slightly at 100ms
        0.95f at 200  // dips below resting scale at 200ms
    },
    label = "press scale"
)

Use keyframes for custom press effects or anything where you need control over intermediate values. The start and end values are implicit — the animation begins at the current value and always lands on targetValue — so you only pin the waypoints in between. It’s more verbose, but it gives you exact control over the curve at each timestamp.

Combining Multiple Animations: Like Button

Each animate*AsState call handles exactly one value. When you need several properties to animate at once, you just stack them. They all read from the same state and run concurrently without any coordination code.

Kotlin
@Composable
fun AnimatedLikeButton() {
    var isLiked by remember { mutableStateOf(false) }

    val scale by animateFloatAsState(
        targetValue = if (isLiked) 1f else 0.85f,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioHighBouncy,
            stiffness = Spring.StiffnessMedium
        ),
        label = "like scale"
    )

    val heartColor by animateColorAsState(
        targetValue = if (isLiked) Color(0xFFE91E63) else Color.Gray,
        animationSpec = tween(durationMillis = 200),
        label = "heart color"
    )

    val iconSize by animateDpAsState(
        targetValue = if (isLiked) 36.dp else 28.dp,
        animationSpec = spring(dampingRatio = Spring.DampingRatioMediumBouncy),
        label = "icon size"
    )

    IconButton(
        onClick = { isLiked = !isLiked },
        modifier = Modifier.graphicsLayer { scaleX = scale; scaleY = scale }
    ) {
        Icon(
            imageVector = if (isLiked) Icons.Filled.Favorite else Icons.Outlined.FavoriteBorder,
            contentDescription = if (isLiked) "Unlike" else "Like",
            tint = heartColor,
            modifier = Modifier.size(iconSize)
        )
    }
}

On tap:

  1. The icon swaps (state change, instant)
  2. Color fades from gray to pink over 200ms (tween)
  3. Scale bounces with a spring (HighBouncy)
  4. Icon size bumps up with a softer spring (MediumBouncy)

Three independent animations, one state variable, no coordinator.

Rotation

The .rotate() modifier accepts a Float, so animateFloatAsState drops right in. Useful for expand/collapse arrows and spinners.

Kotlin
@Composable
fun RotatingArrow() {
    var isExpanded by remember { mutableStateOf(false) }

    val rotation by animateFloatAsState(
        targetValue = if (isExpanded) 180f else 0f,
        animationSpec = tween(durationMillis = 300, easing = EaseInOut),
        label = "arrow rotation"
    )

    Row(
        modifier = Modifier
            .fillMaxWidth()
            .clickable { isExpanded = !isExpanded }
            .padding(16.dp),
        horizontalArrangement = Arrangement.SpaceBetween,
        verticalAlignment = Alignment.CenterVertically
    ) {
        Text("Show details", fontWeight = FontWeight.Medium)
        Icon(
            imageVector = Icons.Default.KeyboardArrowDown,
            contentDescription = "Expand",
            modifier = Modifier.rotate(rotation)
        )
    }
}

Slide-In with Offset

Banner notifications, bottom bars, toast-style messages — offset animation handles all of these. Start the composable off-screen and animate it into position.

Kotlin
@Composable
fun SlideInNotification(
    message: String,
    isVisible: Boolean
) {
    val offsetY by animateDpAsState(
        targetValue = if (isVisible) 0.dp else (-80).dp,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioLowBouncy,
            stiffness = Spring.StiffnessMedium
        ),
        label = "notification offset"
    )

    Surface(
        modifier = Modifier
            .fillMaxWidth()
            .offset(y = offsetY),
        color = Color(0xFF4CAF50),
        shadowElevation = 6.dp,
        shape = RoundedCornerShape(bottomStart = 12.dp, bottomEnd = 12.dp)
    ) {
        Text(
            text = message,
            modifier = Modifier.padding(16.dp),
            color = Color.White,
            fontWeight = FontWeight.Medium
        )
    }
}

@Preview(showBackground = true)
@Composable
fun SlideInNotificationInteractivePreview() {
    CenteredPreview {
        var isVisible by remember { mutableStateOf(false) }

        Column(horizontalAlignment = Alignment.CenterHorizontally) {
            SlideInNotification(
                message = "This is a notification. Hello from animation",
                isVisible = isVisible
            )

            Spacer(modifier = Modifier.height(16.dp))

            Button(onClick = { isVisible = !isVisible }) {
                Text("Toggle")
            }
        }
    }
}

When isVisible becomes true, the banner animates from its current offset to 0.dp, sliding down into view with a slight bounce. When set to false, it animates back to -80.dp, sliding out upward.

Reacting When an Animation Finishes

If you need to trigger something after the animation settles — navigate, update a flag, kick off the next step — use finishedListener:

Kotlin
@Composable
fun AnimationWithCallback() {
    var isMoved by remember { mutableStateOf(false) }
    var statusText by remember { mutableStateOf("Ready") }

    val offsetX by animateDpAsState(
        targetValue = if (isMoved) 200.dp else 0.dp,
        animationSpec = tween(durationMillis = 600),
        label = "move animation",
        finishedListener = { finalValue ->
            // Called once - when the animation fully settles
            statusText = if (finalValue == 200.dp) "Moved!" else "Back home!"
        }
    )

    Column(horizontalAlignment = Alignment.CenterHorizontally) {
        Box(
            modifier = Modifier
                .size(60.dp)
                .offset(x = offsetX)
                .background(Color(0xFF6650A4), shape = CircleShape)
        )
        Spacer(modifier = Modifier.height(16.dp))
        Text(text = statusText)
        Spacer(modifier = Modifier.height(8.dp))
        Button(onClick = { isMoved = !isMoved }) {
            Text(if (isMoved) "Move back" else "Move right")
        }
    }
}

finishedListener fires once, with the final settled value. It does not fire on every frame — that’s what makes it safe to use for side effects.

Performance: Use graphicsLayer for Visual Transforms

For scale, alpha, and rotation, avoid stacking individual modifiers. Batch them in a single graphicsLayer block:

Kotlin
// Prefer this — all transforms applied in one pass on the render thread
// Prefer this — all transforms applied in one pass on the render thread
Modifier.graphicsLayer {
    scaleX = animatedScale
    scaleY = animatedScale
    // Name your animated value something other than `alpha` —
    // graphicsLayer has its own `alpha` property, and `alpha = alpha`
    // would just shadow it and assign the property to itself
    alpha = animatedAlpha
    rotationZ = animatedRotation
}

// Avoid this for pure visual properties
Modifier
    .scale(animatedScale)
    .alpha(animatedAlpha)
    .rotate(animatedRotation)

graphicsLayer applies visual transformations during the draw phase, avoiding layout changes and reducing the cost of recomposition for purely visual updates. This makes it especially efficient for animations like alpha, translation, and scale — particularly in lists or frequently updated UI.

Keep targetValue Simple

If the logic for computing your target value is complex, extract it before passing it in:

Kotlin
// Fine
val scale by animateFloatAsState(
    targetValue = if (isExpanded) 1.2f else 1f,
    label = "scale"
)

// Better to extract first than inline a big when block
val targetScale = when {
    isExpanded && isSelected -> 1.3f
    isExpanded -> 1.15f
    isSelected -> 1.05f
    else -> 1f
}

val scale by animateFloatAsState(targetValue = targetScale, label = "scale")

When Not to Use animate*AsState

animate*AsState is the right tool when you’re animating a single value in response to a state flip. Reach for something else when:

  • You’re animating multiple values that need to stay in sync as a unit → updateTransition
  • You need an infinitely repeating animation → rememberInfiniteTransition
  • You’re tracking a pointer/drag gesture and need manual control → Animatable
  • The composable is entering or leaving the composition → AnimatedVisibility

The last one trips people up most often. animate*AsState can only animate a composable that’s already in the tree. If you’re using if (condition) { MyComposable() } and condition becomes false, MyComposable is gone — there’s nothing left to animate. Wrap it in AnimatedVisibility instead.
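
A minimal sketch of that fix (DetailsPanel and showDetails are stand-ins for your own composable and state):

```kotlin
// Before: when showDetails flips to false, DetailsPanel is removed
// from the composition instantly. There is no exit animation to run.
if (showDetails) {
    DetailsPanel()
}

// After: AnimatedVisibility keeps the content in the tree just long
// enough to play the exit transition before removing it.
AnimatedVisibility(
    visible = showDetails,
    enter = fadeIn() + expandVertically(),
    exit = fadeOut() + shrinkVertically()
) {
    DetailsPanel()
}
```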

Full Example: Custom Animated Toggle

Here’s everything from this article working together in a single component — a toggle row with animated track color, thumb position, and a subtle press scale:

Kotlin
@Composable
fun AnimatedSettingsToggle(
    label: String,
    description: String,
    isEnabled: Boolean,
    onToggle: (Boolean) -> Unit
) {
    val interactionSource = remember { MutableInteractionSource() }
    // Derive the press state from real press events. Setting and
    // immediately resetting a boolean inside the click lambda would
    // never survive recomposition, so the scale would never animate.
    val isPressed by interactionSource.collectIsPressedAsState()

    val trackColor by animateColorAsState(
        targetValue = if (isEnabled) Color(0xFF6650A4) else Color(0xFFCAC4D0),
        animationSpec = tween(durationMillis = 250),
        label = "track color"
    )

    val thumbOffset by animateDpAsState(
        targetValue = if (isEnabled) 20.dp else 2.dp,
        animationSpec = spring(
            dampingRatio = Spring.DampingRatioMediumBouncy,
            stiffness = Spring.StiffnessMedium
        ),
        label = "thumb offset"
    )

    val rowScale by animateFloatAsState(
        targetValue = if (isPressed) 0.98f else 1f,
        animationSpec = tween(durationMillis = 100),
        label = "row press scale"
    )

    Row(
        modifier = Modifier
            .fillMaxWidth()
            .graphicsLayer {
                scaleX = rowScale
                scaleY = rowScale
            }
            .clickable(
                indication = null,
                interactionSource = interactionSource
            ) {
                onToggle(!isEnabled) // toggle value
            }
            .padding(horizontal = 16.dp, vertical = 12.dp),
        horizontalArrangement = Arrangement.SpaceBetween,
        verticalAlignment = Alignment.CenterVertically
    ) {
        Column(modifier = Modifier.weight(1f)) {
            Text(
                text = label,
                fontWeight = FontWeight.SemiBold,
                fontSize = 16.sp
            )
            Text(
                text = description,
                fontSize = 13.sp,
                color = Color.Gray
            )
        }

        Spacer(modifier = Modifier.width(16.dp))

        Box(
            modifier = Modifier
                .width(48.dp)
                .height(28.dp)
                .background(trackColor, shape = RoundedCornerShape(50))
                .padding(horizontal = 2.dp),
            contentAlignment = Alignment.CenterStart
        ) {
            Box(
                modifier = Modifier
                    .size(24.dp)
                    .offset(x = thumbOffset)
                    .background(Color.White, shape = CircleShape)
            )
        }
    }
}

@Preview(showBackground = true)
@Composable
fun AnimatedSettingsTogglePreview() {
    var isEnabled by remember { mutableStateOf(false) }

    CenteredPreview {
        AnimatedSettingsToggle(
            label = "Wi-Fi",
            description = "Enable wireless connectivity",
            isEnabled = isEnabled,
            onToggle = { isEnabled = it } // update preview state so the toggle actually animates
        )
    }
}

Three animate*AsState calls, no third-party library, no animation framework — just state and a handful of composable functions.

Common Mistakes

Animating inside a LazyColumn without stable keys. Each item gets its own animation instance, which is correct — but if your remember isn’t keyed to the item’s identity, Compose may reuse the state for a different item when the list scrolls. Always key your remember calls to something stable and unique per item.

Expecting finishedListener to fire on every frame. It fires once, when the animation settles. If you want per-frame callbacks, you need Animatable with a custom coroutine loop.
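
For reference, the Animatable version looks roughly like this; animateTo accepts a trailing block that runs on every animation frame (a sketch, with isMoved as a stand-in for your own state):

```kotlin
val offsetX = remember { Animatable(0f) }

LaunchedEffect(isMoved) {
    offsetX.animateTo(
        targetValue = if (isMoved) 200f else 0f,
        animationSpec = tween(durationMillis = 600)
    ) {
        // Invoked on every frame; the current value is available as `value`
        Log.d("Anim", "frame: $value")
    }
}
```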

Using animate*AsState for enter/exit animations. When a composable leaves the composition, it’s gone — animate*AsState has nothing to animate. Use AnimatedVisibility for any case where the composable needs to animate out before being removed.

Conclusion

animate*AsState in Jetpack Compose is one of those APIs you end up using all the time.

It keeps animation logic simple and close to your UI state, which is exactly how Compose is meant to work.

Start with small interactions. Once you get comfortable, you’ll naturally move to more advanced animation APIs when needed.

@Preview

Stop Copy-Pasting @Preview Functions. Here’s What Real Developers Do Instead

If you’ve been building UI with Jetpack Compose for more than a week, you already know the drill. You write a clean Composable — maybe a PrimaryButton, or a UserProfileCard — and then you immediately have to write something like this below it:

Kotlin
@Preview(showBackground = true)
@Composable
fun PrimaryButtonPreview() {
    PrimaryButton(
        text = "Click Me",
        onClick = {}
    )
}

Doesn’t look like much, right? Eight lines.

But think about how many Composables you build across a full project. Ten screens, each with five or six components — that’s fifty-plus preview functions you’re hand-typing (or copy-pasting and then refactoring). The mental cost isn’t the typing itself; it’s the context switching. You just nailed your component logic, and now you have to stop, copy the function name, jump below, paste, rename, and restructure. It breaks the creative flow completely.

The good news: Android Studio already has a built-in system designed exactly for this. You just need to set it up once.

Quick answer for the impatient: Go to Settings → Editor → Live Templates, create a new template with the abbreviation prev (or use the one already available in Android Studio), and paste the template code from the next section. That’s it. The rest of this post goes deeper into why and when to use each approach.

Live Templates — The Fastest Fix You’re Not Using

Live Templates are one of Android Studio’s most underused features. They’re essentially smart text snippets that expand when you type a short abbreviation and press Tab. IntelliJ has had them for years — Android Studio inherits them from the same codebase. Kotlin developers who come from other editors sometimes have no idea this exists.

For @Preview specifically, the goal is: you type prev, hit Tab, and the entire preview scaffold appears with your cursor already positioned inside it, ready to fill in any needed parameters.

Setting Up Your First Preview Live Template

Here’s the exact step-by-step process — no skipping ahead:

1. Open Settings

On macOS press ⌘ ,

On Windows/Linux go to File → Settings

In the search bar, type “Live Templates” to jump straight to it.

2. Create a new Template Group 

In the Live Templates panel, click the + button on the right → select Template Group → name it something like Compose. This keeps things organized and separate from Android Studio’s built-in templates.

3. Add a new Live Template

With your new Compose group selected, click + again → this time select Live Template.

4. Fill in the abbreviation and description

Set Abbreviation to prev and give it a Description like “Compose @Preview function”. The description shows up in autocomplete hints, so keep it readable.

5. Paste the template text

This is the core part. Paste the code below exactly as shown — the $VARIABLE$ syntax is how Android Studio knows where to place your cursor and what to ask you to fill in.

Template Code — Basic Preview

Live Template Text

Kotlin
@Preview(showBackground = true)
@Composable
fun $COMPOSABLE_NAME$Preview() {
    $COMPOSABLE_NAME$($END$)
}

The $COMPOSABLE_NAME$ variable is smart — when you tab into the template, Android Studio highlights every occurrence at once. Type the name once and it fills in both places simultaneously (the function name and the call inside it). The $END$ marker tells the editor where to park your cursor after you’re done naming — right inside the parentheses where you’d add parameters.

If you want, you can wrap it in your existing app theme or a Material theme like this:

Kotlin
@Preview(showBackground = true)
@Composable
fun $COMPOSABLE_NAME$Preview() {
    MaterialTheme {
      $COMPOSABLE_NAME$($END$)
    }
}

6. Define the applicable context

At the bottom of the template editor, click Define and check Kotlin. Without this step, the template won’t activate inside .kt files.

7. Hit OK and test it

Open any Kotlin file, type prev, and press Tab. The scaffold should appear.

What It Looks Like In Practice

Before & After

Kotlin
// You write this composable first:
@Composable
fun UserProfileCard(
    name: String,
    avatarUrl: String,
    isOnline: Boolean
) {
    // ... your component logic
}




// Then type "prev" + Tab, type "UserProfileCard", Tab again:

@Preview(showBackground = true)
@Composable
fun UserProfileCardPreview() {
    UserProfileCard(
        // ← cursor lands here, ready for sample data
    )
}

One thing to remember: The template inserts the composable name as a plain text call. If your Composable has required parameters, Android Studio won’t auto-fill them — you’ll need to add sample data yourself. That’s expected and by design; previews should use meaningful placeholder values, not auto-generated garbage.

Multi-Variant Previews: Light, Dark, and Different Screen Sizes

A single light-mode preview is fine for early development. But before you ship anything, you want to see your component in at least two states: light theme and dark theme. You might also want to check it on a compact phone screen vs a larger device. Doing this manually every time is even more tedious than writing a basic preview.

There are two solid ways to handle this in Compose. The cleaner of the two is a custom annotation that stacks multiple @Preview declarations — this keeps your composable files lean and consistent across your entire project.

Approach A: Custom Multi-Preview Annotation

Create a single annotation class in a shared file (something like PreviewAnnotations.kt in your UI module’s utils package):

PreviewAnnotations.kt

Kotlin
import android.content.res.Configuration
import androidx.compose.ui.tooling.preview.Preview

@Preview(
    name = "Light Mode",
    showBackground = true
)
@Preview(
    name = "Dark Mode",
    showBackground = true,
    uiMode = Configuration.UI_MODE_NIGHT_YES
)
@Preview(
    name = "Small Phone",
    showBackground = true,
    widthDp = 320
)
@Retention(AnnotationRetention.BINARY)
@Target(AnnotationTarget.ANNOTATION_CLASS, AnnotationTarget.FUNCTION)
annotation class DevicePreviews

Now your preview functions become genuinely compact. One annotation, three rendered variants:

Usage

Kotlin
@DevicePreviews
@Composable
fun UserProfileCardPreview() {
    YourAppTheme {
        UserProfileCard(
            name = "Priya Rao",
            avatarUrl = "https://softaai.com/avatar.jpg",
            isOnline = true
        )
    }
}

Approach B: Multi-Variant Live Template

If you prefer keeping everything in Live Templates instead of a shared annotation file, create a second template with abbreviation prevmulti:

Live Template Text — prevmulti

Kotlin
@Preview(name = "Light", showBackground = true)
@Preview(
    name = "Dark",
    showBackground = true,
    uiMode = Configuration.UI_MODE_NIGHT_YES
)
@Composable
fun $COMPOSABLE_NAME$Preview() {
    $APP_THEME$ {
        $COMPOSABLE_NAME$($END$)
    }
}

Between these two approaches, the custom annotation (Approach A) is better for teams and larger projects. It lives in source control, everyone uses the same preview config automatically, and updating it once updates every preview across the codebase. Live Templates are per-developer and per-machine — great for solo work, less ideal for shared codebases.

Note: Instead of defining a custom @DevicePreviews annotation as above, you can use the built-in @PreviewScreenSizes.

File Templates: Scaffold Both the Composable and Preview at Once

Live Templates solve the in-file boilerplate problem. But what if your workflow always starts with creating a new file? If you find yourself doing File → New → Kotlin File/Class and then manually typing the @Composable and @Preview blocks from scratch, File Templates take this even further.

A File Template is a pre-defined structure that Android Studio uses when you create a new file through the right-click menu. You can define your own and make “New Composable File” a real option.

1. Open Settings → File and Code Templates

Navigate to Editor → File and Code Templates. You’ll see the default list of templates on the left (Kotlin File, Interface, Class, etc.).

2. Click + to create a new template

Name it something like Composable, set the extension to kt.

3. Paste the template body

Use the $NAME variable — Android Studio prompts the user to fill this in when they create the file.

File Template Body

File Template — Composable.kt

Kotlin
#if (${PACKAGE_NAME} && ${PACKAGE_NAME} != "")package ${PACKAGE_NAME}
#end

import androidx.compose.material3.MaterialTheme
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

@Composable
fun ${NAME}() {

}

@Preview(showBackground = true)
@Composable
fun ${NAME}Preview() {
    MaterialTheme {
       ${NAME}()
    }
}

After saving this, right-clicking any package in your project tree will show New → Composable in the menu. You type the component name once, and you get a file with proper package declaration, imports, a blank composable, and its preview — all ready to go.

Which Approach Should You Actually Use?

The honest answer: it depends on your context. Here’s a clear breakdown to help you decide without overthinking it.

For most individual developers: start with a Live Template. It takes five minutes, pays off immediately, and you don’t need to touch it again. If you’re working on a team or a long-lived codebase, invest the extra ten minutes to set up a custom @DevicePreviews annotation and commit it to the repo. That way the entire team benefits without any individual setup.

Conclusion

The friction of writing @Preview boilerplate is real, but it’s entirely self-imposed. Android Studio has the tools to make this near-instant — you just need to spend fifteen minutes setting them up once. A Live Template handles the basic case in two keystrokes. A custom annotation handles multi-variant previews for teams. File Templates handle the new-file workflow.

Pick the one that fits your current workflow and set it up today. The next time you build a Composable, you’ll feel the difference immediately.

Jetpack Compose Animation System

Jetpack Compose Animation System Explained: A Beginner Guide

Animations are one of those things that feel easy until you actually try to wire them into a real screen. You start with a simple fade or size change, and suddenly you’re juggling state, re-composition, and timing issues that don’t behave the way you expected.

I ran into this while building a product listing screen. Small interactions like expanding cards, animating filters, and handling loading states quickly became messy. That’s when the Jetpack Compose Animation System started to make sense — not as a set of APIs, but as a model tied directly to state.

This post breaks that down in a practical way.

What the Jetpack Compose Animation System Actually Is

The core idea is straightforward:

Your UI depends on state, and animations happen when that state changes.

You don’t trigger animations manually. You describe what the UI should look like for a given state, and Compose handles the transition.

Instead of writing something like “start animation on click”, you write:

  • if expanded → height = 200dp
  • if collapsed → height = 100dp

When the state changes, Compose animates between those values.

Understanding this mental model will make everything else click. In Compose, your UI is a function of state:

UI = f(state) — When state changes, Compose re-renders the UI. Animations are just a smooth interpolation between two states over time. You don’t “run” an animation — you change state and tell Compose how to animate the transition.

The animation system in Compose has three layers, and it’s worth knowing which layer you’re working at:

Layer 1 — High-level APIs: AnimatedVisibility, AnimatedContent, Crossfade. These handle the most common cases with zero configuration needed.

Layer 2 — Value-based APIs: animate*AsState, updateTransition, InfiniteTransition. These animate specific values (Float, Dp, Color, etc.) that you then apply in your composables.

Layer 3 — Low-level APIs: Animatable, coroutine-based. Full manual control for complex sequencing, interruptions, or physics-based motion.

The golden rule: start at the highest level that solves your problem. Only go deeper when you genuinely need more control. Most production animations live happily in layers 1 and 2.
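
To make the layers concrete, here is the same fade written at layer 1 and at layer 2 (Banner and isShown are placeholders):

```kotlin
// Layer 1: AnimatedVisibility owns the whole enter/exit story
AnimatedVisibility(visible = isShown) {
    Banner()
}

// Layer 2: you animate the alpha value yourself and apply it manually.
// More control, more responsibility: the composable never leaves the
// tree, it just becomes fully transparent.
val bannerAlpha by animateFloatAsState(
    targetValue = if (isShown) 1f else 0f,
    label = "banner alpha"
)
Banner(modifier = Modifier.alpha(bannerAlpha))
```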

The Core Building Blocks

Before writing any animations, it helps to understand the main APIs you’ll actually use:

1. animate*AsState

For simple, one-off animations tied to a single value.

2. updateTransition

For animating multiple values based on the same state.

3. AnimatedVisibility

For showing and hiding composables with animation.

4. AnimatedContent

For switching between UI states.

5. rememberInfiniteTransition

For looping animations.

You don’t need all of them at once. Most real screens use 1–2 of these consistently.

Why This Model Works Well

Once you lean into this approach, a few things improve right away:

  • No need to manage animation lifecycle
  • No manual cancellation logic
  • UI stays consistent with state
  • Less glue code

This becomes especially useful when multiple properties change together. You don’t coordinate them manually. You just describe the end result.

Your First Real Animation

Let’s build something practical: a card that expands when clicked.

Step 1: Define State

Kotlin
@Composable
fun ExpandableCard() {
    var expanded by remember { mutableStateOf(false) }

This is the trigger. Everything depends on this boolean.

Step 2: Animate a Value

Kotlin
val height by animateDpAsState(
        targetValue = if (expanded) 200.dp else 100.dp,
        label = "cardHeight"
    )

What’s happening here:

  • targetValue changes when expanded changes
  • Compose animates between old and new values
  • The result (height) updates continuously during animation

You don’t write animation logic. You describe the end state.

Step 3: Apply It to UI

Kotlin
Card(
        modifier = Modifier
            .fillMaxWidth()
            .height(height)
            .clickable { expanded = !expanded }
    ) {
        Text(
            text = if (expanded) "Expanded content" else "Collapsed",
            modifier = Modifier.padding(16.dp)
        )
    }
}

That’s it. No animator objects, no listeners.

Full Code with Working Preview

Kotlin
@Composable
fun ExpandableCard() {
    var expanded by remember { mutableStateOf(false) }

    val height by animateDpAsState(
        targetValue = if (expanded) 200.dp else 100.dp,
        label = "cardHeight"
    )
    Card(
        modifier = Modifier
            .fillMaxWidth()
            .height(height)
            .clickable { expanded = !expanded }
    ) {
        Text(
            text = if (expanded) "Expanded content" else "Collapsed",
            modifier = Modifier.padding(16.dp)
        )
    }
}

@Preview(showBackground = true)
@Composable
private fun ExpandableCardPreview() {
    MaterialTheme {
        ExpandableCard()
    }
}

Controlling Animation Behavior

The default animation works, but real apps need control.

Custom Animation Spec

Kotlin
val height by animateDpAsState(
    targetValue = if (expanded) 200.dp else 100.dp,
    animationSpec = tween(
        durationMillis = 500,
        easing = FastOutSlowInEasing
    ),
    label = "cardHeight"
)

Now you control:

  • Duration
  • Easing curve

Spring Animation

Kotlin
animationSpec = spring(
    dampingRatio = Spring.DampingRatioMediumBouncy,
    stiffness = Spring.StiffnessLow
)

Spring animations feel more natural for things like cards or draggable UI.
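
As a rough starting point, the built-in constants cover most cases; tune by feel rather than by exact numbers:

```kotlin
// No overshoot: good for color and alpha, where bounce looks wrong
spring<Dp>(dampingRatio = Spring.DampingRatioNoBouncy)

// Slight overshoot: good for position and size changes
spring<Dp>(dampingRatio = Spring.DampingRatioMediumBouncy)

// Lower stiffness settles slowly and softly; higher snaps into place
spring<Dp>(
    dampingRatio = Spring.DampingRatioLowBouncy,
    stiffness = Spring.StiffnessLow
)
```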

Animating Multiple Properties Together

This is where updateTransition becomes useful.

Let’s animate both height and color based on the same state.

Kotlin
val transition = updateTransition(
    targetState = expanded,
    label = "cardTransition"
)

Animate Height

Kotlin
val height by transition.animateDp(
    label = "height"
) { state ->
    if (state) 200.dp else 100.dp
}

Animate Color

Kotlin
val backgroundColor by transition.animateColor(
    label = "color"
) { state ->
    if (state) Color.Blue else Color.Gray
}

Apply to UI

Kotlin
Card(
    modifier = Modifier
        .fillMaxWidth()
        .height(height)
        .clickable { expanded = !expanded },
    colors = CardDefaults.cardColors(containerColor = backgroundColor)
) {
    Text(
        text = "Tap to expand",
        modifier = Modifier.padding(16.dp)
    )
}

Now both properties animate in sync, driven by the same state.

Showing and Hiding Content

For visibility changes, don’t animate alpha manually. Use AnimatedVisibility.

Kotlin
AnimatedVisibility(visible = expanded) {
    Text(
        text = "Extra details shown here",
        modifier = Modifier.padding(16.dp)
    )
}

By default, it fades and expands. You can customize it:

Kotlin
AnimatedVisibility(
    visible = expanded,
    enter = fadeIn() + expandVertically(),
    exit = fadeOut() + shrinkVertically()
) {
    Text("Details")
}

This keeps your intent clear: you’re not animating alpha, you’re controlling visibility.

Switching Between UI States

For replacing content, use AnimatedContent.

Kotlin
AnimatedContent(targetState = expanded, label = "content") { state ->
    if (state) {
        Text("Expanded View")
    } else {
        Text("Collapsed View")
    }
}

This automatically animates between the two layouts.
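
When the default crossfade is not enough, the transitionSpec parameter lets you choose how the old and new content move. A sketch using slide plus fade:

```kotlin
AnimatedContent(
    targetState = expanded,
    transitionSpec = {
        // New content slides in from the bottom while fading in;
        // old content slides out the top while fading out
        (slideInVertically { height -> height } + fadeIn())
            .togetherWith(slideOutVertically { height -> -height } + fadeOut())
    },
    label = "content"
) { state ->
    Text(if (state) "Expanded View" else "Collapsed View")
}
```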

Infinite Animations

For loaders or subtle UI effects:

Kotlin
val infiniteTransition = rememberInfiniteTransition(label = "pulse")

val scale by infiniteTransition.animateFloat(
    initialValue = 1f,
    targetValue = 1.1f,
    animationSpec = infiniteRepeatable(
        animation = tween(800),
        repeatMode = RepeatMode.Reverse
    ),
    label = "scale"
)

Apply it:

Kotlin
Box(
    modifier = Modifier
        .size(100.dp)
        .scale(scale)
        .background(Color.Blue)
)

Good for:

  • Loading indicators
  • Attention hints
  • Micro-interactions

Conclusion

The Jetpack Compose Animation System feels strange at first because it flips the mental model. You’re not telling the UI how to animate. You’re describing how it should look in different states.

Once that clicks, animations become predictable.

Start small:

  • Animate size
  • Then color
  • Then combine them

After a few screens, you’ll stop thinking about “animations” entirely and just think in terms of state transitions.

That’s when Compose starts to feel natural.

A Complete Developer’s Guide to Faster Apps

Android 16 KB Page Size: A Complete Developer’s Guide to Faster Apps

Most Android performance improvements land as a framework update or a new API. This one is different. Starting with Android 15, Google added support for a 16 KB page size on ARM64 devices — and with Android 16, it’s becoming a hard requirement for apps that target new hardware.

If you haven’t looked into this yet, now is a good time. Apps that ship 4 KB-aligned native libraries will fail to load on 16 KB page-size devices. The failure isn’t graceful — it’s an UnsatisfiedLinkError and a crash.

This guide covers what the change is, which apps are affected, how to check your own APK, and what to actually do about it.

Memory Pages: A Quick Refresher

The OS doesn’t allocate memory one byte at a time — it works in fixed-size blocks called pages. For decades, Android (like most Linux systems) used a 4 KB page size. That made sense when RAM was limited and apps were simpler.

Modern flagship devices are a different story. They have multiple gigabytes of RAM, 64-bit ARM processors, and apps that load dozens of native libraries at startup. Managing all of that in 4 KB chunks means more page table entries, more TLB pressure, and more overhead on every app launch.

16 KB pages reduce that overhead. The OS manages fewer, larger chunks — fewer page faults at startup, fewer TLB misses during execution, and less kernel bookkeeping overall.
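
The bookkeeping difference is easy to quantify. This small calculation (plain Kotlin, nothing Android-specific; 64 MB is just an illustrative mapping size) shows how many pages the kernel must track at each page size:

```kotlin
fun pagesNeeded(regionBytes: Long, pageSizeBytes: Long): Long {
    // Round up: a partially filled page still costs a full page table entry
    return (regionBytes + pageSizeBytes - 1) / pageSizeBytes
}

fun main() {
    val region = 64L * 1024 * 1024               // a 64 MB mapping
    val on4k = pagesNeeded(region, 4 * 1024L)    // 4 KB pages
    val on16k = pagesNeeded(region, 16 * 1024L)  // 16 KB pages
    println("4 KB pages:  $on4k entries")
    println("16 KB pages: $on16k entries (${on4k / on16k}x fewer)")
}
```

Four times fewer entries to walk, fault in, and cache — that is where the startup and TLB wins come from.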

Why Google Made the Change

The performance case is real:

Faster cold starts. Fewer pages need to be mapped during app startup. Google’s benchmarks showed cold launch improvements of up to 30% on devices running a 16 KB page-size kernel.

Better TLB efficiency. The TLB (Translation Lookaside Buffer) is a small hardware cache that maps virtual addresses to physical memory. With 16 KB pages, each TLB entry covers four times more memory, which means fewer misses on cache-heavy operations.

Less kernel overhead. Fewer pages means a smaller page table. The kernel spends less time on memory management and more time running your code.

Industry alignment. Apple has used 16 KB pages on ARM devices for years. The mainline Linux kernel has progressively added support too. Android isn’t ahead of the curve here — it’s catching up.

Where Things Stand in 2026

  • Android 15 introduced 16 KB page size support in the emulator so developers could start testing.
  • Android 16 is expected to require 16 KB compliance for apps targeting API 36 on supported hardware.
  • Pixel 9 and later are expected to ship with kernels configured for 16 KB pages.
  • Play Console already shows warnings for apps that bundle 4 KB-aligned .so files when targeting API 35+.

The install base of 16 KB devices is still small, but it will grow quickly as new flagships ship. Getting ahead of this now is much easier than scrambling when Play starts rejecting updates.

Does This Affect Your App?

It depends entirely on whether your app includes native code.

Pure Kotlin or Java apps

You’re largely fine. The Android Runtime handles .dex alignment automatically, so managed code isn’t affected. The one thing to watch is third-party SDKs — they sometimes bundle native .so files you didn’t write and may not have checked.

Apps with NDK or native libraries

This is where the requirement has real teeth. If your app includes:

  • Native libraries (.so files) built with the NDK
  • Pre-built .so files from third-party SDKs
  • A game engine like Unity or Cocos2d
  • Audio, video, or image processing libraries with native bindings

…then every one of those .so files needs to be compiled with 16 KB-aligned ELF segments. If any aren’t, the OS on a 16 KB device will refuse to load them.

Check Your APK

Before touching any build config, find out where you actually stand.

Use readelf on your .so files

Bash
# Unzip the APK
unzip your-app.apk -d app-contents

# Inspect a native library
readelf -l app-contents/lib/arm64-v8a/libyourlibrary.so | grep LOAD

Look at the alignment column on the right side of each LOAD segment line:

  • 0x4000 = 16384 bytes = 16 KB → compliant
  • 0x1000 = 4096 bytes = 4 KB → needs recompiling

Compliant output:

LOAD  0x000000 ... 0x001abc 0x001abc R   0x4000
LOAD  0x002000 ... 0x005def 0x005def R E 0x4000

Non-compliant output:

LOAD  0x000000 ... 0x001abc 0x001abc R   0x1000

Do this for every .so in the APK, not just the ones you wrote. Third-party libraries need to pass too.
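
If the APK bundles many libraries, a small script saves the repetition. This sketch classifies a readelf listing as compliant or not; for real use, pipe each library’s `readelf -l` output through it (readelf from binutils is assumed to be on PATH):

```shell
#!/bin/sh
# Reads `readelf -l` output on stdin and prints "compliant" only if
# every LOAD segment is aligned to 0x4000 (16 KB).
classify_alignment() {
  awk '
    $1 == "LOAD" { if ($NF != "0x4000") bad = 1 }
    END { print (bad ? "needs-recompiling" : "compliant") }
  '
}

# Typical usage, one line of output per library:
#   for so in app-contents/lib/arm64-v8a/*.so; do
#     printf '%s: %s\n' "$so" "$(readelf -l "$so" | classify_alignment)"
#   done
```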

Run AGP’s built-in lint check

Android Gradle Plugin 8.5+ includes a lint check specifically for this. Run:

./gradlew lint

Look for warnings tagged PageSizeAlignment. They’ll call out each non-compliant library by name.

Fix Your Own Native Libraries

If you maintain native code with the NDK, the fix is a single linker flag.

With CMake

CMake
# CMakeLists.txt

cmake_minimum_required(VERSION 3.22.1)
project(MyNativeLib)

add_library(
    mynativelib
    SHARED
    src/main/cpp/mynativelib.cpp
)

# Tell the linker to align ELF LOAD segments to 16 KB boundaries
target_link_options(mynativelib PRIVATE "-Wl,-z,max-page-size=16384")

find_library(log-lib log)

target_link_libraries(
    mynativelib
    ${log-lib}
)

The flag -Wl,-z,max-page-size=16384 passes max-page-size=16384 directly to the linker. It sets the alignment of every LOAD segment in the output .so to 16 KB. That’s all the change requires on your end.

With ndk-build

Makefile
# Android.mk

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := mynativelib
LOCAL_SRC_FILES := mynativelib.cpp

# 16 KB page size alignment
LOCAL_LDFLAGS   := -Wl,-z,max-page-size=16384

include $(BUILD_SHARED_LIBRARY)

After rebuilding, re-run the readelf check to confirm the alignment value changed from 0x1000 to 0x4000.

One thing worth knowing: a 16 KB-aligned .so runs fine on 4 KB devices too. The extra alignment padding is harmless on older hardware. You don’t need separate builds — one .so covers both.

Kotlin: What You Need to Handle

Kotlin doesn’t control ELF alignment, but there are places where Kotlin code loads native libraries and should handle failures gracefully.

Safe native library loading

System.loadLibrary() throws UnsatisfiedLinkError if a .so fails to load — which on a 16 KB device usually means the library isn’t aligned. Without handling this, the app just crashes.

Kotlin
// NativeLibraryLoader.kt

object NativeLibraryLoader {

    private const val TAG = "NativeLibraryLoader"

    /**
     * Loads a native library and returns false (instead of crashing)
     * if it fails. On 16 KB page-size devices, an UnsatisfiedLinkError
     * usually means the .so wasn't compiled with max-page-size=16384.
     */
    fun loadSafely(libraryName: String): Boolean {
        return try {
            System.loadLibrary(libraryName)
            Log.d(TAG, "Loaded: lib$libraryName.so")
            true
        } catch (e: UnsatisfiedLinkError) {
            Log.e(
                TAG,
                "Failed to load lib$libraryName.so — possible 16 KB alignment issue. " +
                "Recompile with: -Wl,-z,max-page-size=16384",
                e
            )
            false
        } catch (e: SecurityException) {
            Log.e(TAG, "Security exception loading lib$libraryName.so", e)
            false
        }
    }
}

Use it in your Activity or Application:

Kotlin
// MainActivity.kt

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val loaded = NativeLibraryLoader.loadSafely("mynativelib")

        if (!loaded) {
            showCompatibilityError()
        }
    }

    private fun showCompatibilityError() {
        AlertDialog.Builder(this)
            .setTitle("Compatibility Issue")
            .setMessage(
                "A required component couldn't load on this device. " +
                "Try updating the app to get the latest compatibility fixes."
            )
            .setPositiveButton("OK", null)
            .show()
    }
}

This avoids a crash and gives the user a message they can actually act on, instead of an unexplained native load failure.

Detecting page size at runtime

Sometimes you need to know which page size the device is using — for example, to decide whether to enable a feature backed by a library you haven’t fully audited yet.

Kotlin
// PageSizeDetector.kt

import android.system.Os
import android.system.OsConstants

/**
 * Reads the system page size at runtime using the POSIX sysconf API.
 * Returns 4096 on standard devices, 16384 on 16 KB page-size devices.
 */
object PageSizeDetector {

    fun getPageSizeInBytes(): Long {
        return Os.sysconf(OsConstants._SC_PAGESIZE)
    }

    fun is16KBPageSize(): Boolean {
        return getPageSizeInBytes() == 16384L
    }

    fun description(): String {
        return when (getPageSizeInBytes()) {
            4096L  -> "4 KB"
            16384L -> "16 KB"
            else   -> "${getPageSizeInBytes()} bytes (unknown)"
        }
    }
}

Log it at startup. It takes one line and can save real debugging time when a crash report comes in from an unfamiliar device:

Kotlin
// App.kt

class App : Application() {
    override fun onCreate() {
        super.onCreate()
        Log.i("App", "Page size: ${PageSizeDetector.description()}")
    }
}

When you see a crash log from a device you can’t reproduce locally, the page size entry tells you whether you’re looking at an alignment problem or something else entirely.

Auditing bundled native libraries at debug time

This helper scans your app’s native library directory and lists every .so it finds. It won’t tell you the alignment directly (use readelf for that), but it gives you a complete list to work through — which matters when you’re auditing a project with a lot of dependencies.

Kotlin
// SdkCompatibilityChecker.kt

import java.io.File

/**
 * Lists all native libraries bundled in the APK at runtime.
 * Run this in debug builds to build your audit list.
 * For actual alignment verification, use readelf on each file.
 */
object SdkCompatibilityChecker {

    private const val TAG = "SdkCompatibilityChecker"

    fun findNativeLibraries(context: android.content.Context): List<String> {
        val nativeLibDir = File(context.applicationInfo.nativeLibraryDir)

        if (!nativeLibDir.exists() || !nativeLibDir.isDirectory) {
            Log.w(TAG, "No native library directory found.")
            return emptyList()
        }

        return nativeLibDir
            .listFiles { file -> file.name.endsWith(".so") }
            ?.map { it.name }
            ?: emptyList()
    }

    fun auditAndLog(context: android.content.Context) {
        val libs = findNativeLibraries(context)

        if (libs.isEmpty()) {
            Log.i(TAG, "No native libraries found.")
            return
        }

        Log.w(TAG, "Found ${libs.size} native libraries — verify each with readelf:")
        libs.forEach { Log.w(TAG, "  -> $it") }
    }
}

Wire it into your Application class behind a BuildConfig.DEBUG check:

Kotlin
class App : Application() {

    override fun onCreate() {
        super.onCreate()
        if (BuildConfig.DEBUG) {
            SdkCompatibilityChecker.auditAndLog(this)
        }
    }
}

Every debug run now logs a full list of native libraries. Paste it into a spreadsheet, mark which ones you own, and track the audit from there.

Test on a 16 KB Emulator

You don’t need a physical device for this. Android Studio ships with 16 KB emulator images.

Create the emulator

  1. Open Android Studio → Device Manager
  2. Click Create Device
  3. Pick a Pixel 8 or later hardware profile
  4. On the system image screen, select an image labelled “16k page size” (available for API 35 and API 36)
  5. Finish the setup and start the emulator

Confirm it’s configured correctly

adb shell getconf PAGE_SIZE

16384 means you’re on a 16 KB device. 4096 means something went wrong with the AVD setup.

What to watch for when running your app

  • Crash on launch → a native library failed to load; check Logcat for the library name
  • UnsatisfiedLinkError in Logcat → that specific .so is 4 KB aligned
  • App runs normally → you’re compliant

Dealing With Third-Party Libraries You Can’t Recompile

Your code might be clean, but one of your dependencies is shipping a 4 KB-aligned .so that you have no control over.

Option 1 — Contact the vendor. File a GitHub issue or support ticket referencing the Android 16 KB page size requirement. Most major SDKs (Firebase, Google Play Services, Crashlytics) are already compliant. Smaller or older SDKs may need a nudge.

Option 2 — Gate the feature at runtime. While you wait for the vendor to ship a fix, use PageSizeDetector to disable the feature on affected devices:

Kotlin
// FeatureManager.kt

object FeatureManager {

    /**
     * Returns false on 16 KB page-size devices if the underlying
     * native library hasn't been verified as compliant yet.
     * Flip this to true once your SDK vendor ships a fix.
     */
    fun isNativeFeatureEnabled(): Boolean {
        if (PageSizeDetector.is16KBPageSize()) {
            Log.w("FeatureManager", "Skipping native feature on 16 KB device — awaiting SDK update.")
            return false
        }
        return true
    }
}

Option 3 — Write a Kotlin fallback. For features where a fallback is feasible, have two paths: the native implementation for standard devices, and a pure Kotlin path for 16 KB devices until the library is updated.

Kotlin
// ImageProcessor.kt

class ImageProcessor {

    /**
     * Uses the fast native path on verified devices, falls back to
     * Kotlin on 16 KB page-size devices until the native library is updated.
     */
    fun processImage(bitmap: android.graphics.Bitmap): android.graphics.Bitmap {
        return if (FeatureManager.isNativeFeatureEnabled()) {
            processImageNative(bitmap)  // C++ via JNI
        } else {
            processImageKotlin(bitmap)  // Pure Kotlin fallback
        }
    }

    private external fun processImageNative(
        bitmap: android.graphics.Bitmap
    ): android.graphics.Bitmap

    private fun processImageKotlin(
        bitmap: android.graphics.Bitmap
    ): android.graphics.Bitmap {
        val copy = bitmap.copy(bitmap.config, true)
        // apply transformations
        return copy
    }
}

This keeps the app working on all devices. The Kotlin path is slower, but it beats a crash.

Google Play Requirements

Play Console already flags 4 KB-aligned libraries as warnings when you target API 34 or lower. However, for API 35 (Android 15) and above, 16 KB compliance is now mandatory for all new apps and updates. While Google initially allowed extensions, as of 2026, non-compliant apps with native code will face immediate rejection during the upload process.

Check Play Console → Release → App bundle explorer → [Select Version] → Supported page sizes for any warnings or “Not Supported” labels regarding native library alignment. Deal with them immediately to ensure your releases are not blocked.

Real Performance Numbers

The gains are genuine but not uniform across all app types:

Metric                  4 KB Pages   16 KB Pages
Cold app launch         Baseline     Up to 30% faster
TLB miss rate           Higher       Lower
Kernel page table size  Larger       Smaller
Memory fragmentation    More         Less
App RAM footprint       Baseline     Marginally higher

The trade-off: small allocations get rounded up to the next 16 KB boundary, so there’s a slight increase in memory usage. For most apps it’s a few hundred KB at most — well worth the startup speed improvement.
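The rounding behind that trade-off is easy to sketch: each mapping is rounded up to the next page boundary, so larger pages mean more padding per allocation (the helper name is illustrative):

```kotlin
// Rounds a mapping up to the next page boundary. (Helper name is illustrative.)
fun roundUpToPage(sizeBytes: Long, pageSizeBytes: Long): Long =
    ((sizeBytes + pageSizeBytes - 1) / pageSizeBytes) * pageSizeBytes

// A 5 000-byte mapping:
//   4 KB pages:  rounds up to 8 192  ->  3 192 bytes of padding
//   16 KB pages: rounds up to 16 384 -> 11 384 bytes of padding
```

Multiply that padding by the number of mappings an app holds and you get the "few hundred KB" figure for typical apps.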

Migration Checklist

Android 16 KB Page Size - Pre-ship Checklist
=============================================

□ Unzipped APK and located all .so files under lib/arm64-v8a/
□ Ran readelf -l on each .so - confirmed LOAD alignment is 0x4000
□ Added -Wl,-z,max-page-size=16384 to CMakeLists.txt or Android.mk
□ Rebuilt native libraries - re-verified alignment with readelf
□ Audited all third-party .so files - opened tickets with non-compliant vendors
□ Added NativeLibraryLoader with UnsatisfiedLinkError handling
□ Added PageSizeDetector and logging to Application.onCreate()
□ Added SdkCompatibilityChecker to debug builds
□ Created a 16k page size AVD in Android Studio
□ Ran the app on the 16k emulator - no crashes, no UnsatisfiedLinkError
□ Ran ./gradlew lint - no PageSizeAlignment warnings
□ Checked Play Console - no native library alignment warnings

FAQ

Does this affect all Android devices right now?

No. The 16 KB page size requires specific kernel and hardware support. Older devices will keep using 4 KB pages. But as Pixel 9 and later devices ship with 16 KB kernels, the affected install base will grow steadily.

My app is pure Kotlin with no NDK. Do I need to do anything?

Probably not. ART handles alignment for managed code automatically. Just double-check your Gradle dependencies for any SDKs that bundle .so files — those are the only risk for a pure Kotlin app.

Will a 4 KB-aligned .so actually crash the app?

Yes. On a 16 KB page-size device, System.loadLibrary() will throw UnsatisfiedLinkError if the .so isn’t properly aligned. That’s an app crash unless you catch it.

Can one .so file work on both 4 KB and 16 KB devices?

Yes. A library compiled with -Wl,-z,max-page-size=16384 works fine on 4 KB devices — the extra alignment is just padding that gets ignored. You don’t need separate builds for different page sizes.

What about Unity?

Unity generates native .so files, so yes, it’s affected. Unity has been shipping fixes in recent LTS versions. Make sure you’re on an up-to-date Unity LTS release and rebuild your project after upgrading.

Conclusion

The Android 16 KB page size change is the kind of requirement that’s easy to ignore until it starts causing crashes on new hardware. The fix is straightforward if you own your native code — it’s one linker flag and a rebuild. The harder work is tracking down third-party SDKs that haven’t updated yet and building a plan for those.

Start by running the readelf check on your APK today. If everything comes back as 0x4000, you’re done. If not, the checklist above has every step you need.

Android Emulator Settings

Android Emulator Settings for Speed & Performance: A Practical Guide for Real-World Development

If you’ve worked with Android long enough, you already know this: emulator performance isn’t just about speed, it’s about consistency.

A fast emulator that behaves unpredictably is worse than a slightly slower one that’s stable.

This guide focuses on Android Emulator Settings that hold up in real-world development. Not just for solo projects, but for teams, CI pipelines, and production-grade workflows.

How to Think About Emulator Performance

Before changing settings, it helps to understand what actually impacts emulator performance.

There are three main bottlenecks:

  1. CPU virtualization overhead
  2. Memory pressure (host + emulator)
  3. GPU rendering pipeline

Most “tuning tips” online ignore this and suggest arbitrary numbers. In practice, performance tuning should be constraint-driven, not guesswork.

Core Android Emulator Settings That Make a Difference

Let’s go through the settings that consistently make a difference.

CPU Allocation: Less Is Often More

A common mistake is over-allocating CPU cores.

What works in practice:

  • 2 cores → stable baseline (recommended for most cases)
  • 3–4 cores → only if profiling shows CPU bottlenecks

Why this matters:
The emulator runs inside a virtualized environment. Giving it too many cores can increase context switching and hurt overall system responsiveness.

Rule of thumb:
If your host machine slows down, your emulator will too.

RAM Allocation: Avoid Starving the Host

This is where people usually overdo it.

  • Start with 2–4 GB
  • Increase only if you see real issues (UI lag, memory errors)

Giving the emulator too much RAM can slow down everything else on your system, which ends up hurting performance overall.

VM Heap Size

This one gets confused with RAM, but it’s not the same thing.

VM Heap controls how much memory an app inside the emulator can use, not the emulator itself.

  • Default value is usually fine
  • Increase only if you’re testing memory-heavy apps (large bitmaps, video, complex Compose UIs)

If you set it too high without a reason:

  • You won’t see real benefits
  • You may hide memory issues that show up on real devices

Practical note: If your app only runs after increasing VM Heap, that's a signal to fix memory usage, not raise limits.

Watch for:

  • OutOfMemoryError
  • Frequent GC activity in Logcat
  • UI stutter caused by memory pressure
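To spot that pressure early, you can log heap headroom from plain Kotlin. This sketch uses the java.lang Runtime API, so it runs on any JVM as well as ART; the helper name is illustrative. In an app you might call it from Application.onCreate() or a debug overlay:

```kotlin
// Summarises current heap usage. Runtime is a java.lang API, so this runs on
// any JVM as well as on ART inside the emulator.
fun heapSummary(): String {
    val rt = Runtime.getRuntime()
    val usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024)
    val maxMb = rt.maxMemory() / (1024 * 1024)
    return "Heap: $usedMb MB used of $maxMb MB max"
}
```

If the "used" figure keeps climbing toward "max" during normal use, fix the leak rather than raising the VM Heap setting.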

System Images: x86_64 vs ARM (Context Matters in 2026)

For most desktop environments:

  • x86_64 images → still the default for performance

However:

  • On Apple Silicon (ARM hosts), ARM images can perform better due to reduced translation overhead.

Takeaway:
Choose the image based on your host architecture, not habit.

Hardware Acceleration: Non-Negotiable

Without hardware acceleration, nothing else will save you.

  • Windows → WHPX / Hyper-V
  • Linux → KVM
  • macOS → Hypervisor.framework

If virtualization isn’t enabled in BIOS/UEFI, performance will collapse.

GPU Rendering: Prefer Hardware, Validate When Needed

Set graphics to:

  • Hardware (GLES 2.0 or 3.0)

This improves:

  • UI responsiveness
  • Frame rendering
  • Animation smoothness

When to switch to software:

  • Debugging rendering issues
  • Investigating device-specific GPU bugs

Resolution and Device Profile

Higher resolution increases GPU load.

Practical setup:

  • Use 720p or 1080p for daily development
  • Use higher resolutions only for layout validation

Avoid treating the emulator like a flagship device unless required.

Quick Boot vs Cold Boot: Know the Trade-Off

Quick Boot is convenient, but not always safe.

Use Quick Boot when:

  • Iterating during development
  • You need faster startup

Use Cold Boot when:

  • Running tests
  • Debugging inconsistent behavior
  • Working in CI environments

Snapshots can introduce subtle state issues that are hard to trace.

Settings for CI/CD and Team Environments

This is where things usually break if you’re not careful.

Headless Emulator Configuration

In CI, always run the emulator in headless mode:

emulator -avd Pixel_API_34 -no-window -no-audio -no-boot-anim

Why:

  • Reduces resource usage
  • Improves startup time
  • Avoids GPU dependency issues in CI
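One way to get clean, headless runs without hand-managing AVDs is Gradle Managed Devices. A sketch in Kotlin DSL, assuming a recent Android Gradle Plugin (the device name is illustrative; older AGP versions used a `devices` container instead of `localDevices`):

```kotlin
// build.gradle.kts (module level) — sketch of a Gradle Managed Device.
android {
    testOptions {
        managedDevices {
            localDevices {
                create("pixel8Api35") {
                    device = "Pixel 8"          // hardware profile
                    apiLevel = 35
                    systemImageSource = "aosp"  // plain AOSP image
                }
            }
        }
    }
}
```

This should expose a task along the lines of `./gradlew pixel8Api35DebugAndroidTest`, which provisions the emulator, runs instrumented tests headlessly, and tears it down afterwards.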

Avoid Snapshots in CI

Snapshots make runs inconsistent.

  • Always use a clean start
  • Prefer cold boot

Consistency matters more than startup time in pipelines.

Resource Limits

If you’re running multiple emulators:

  • Stick to 2 cores and ~2 GB RAM per instance
  • Avoid running heavy builds on the same machine at the same time

Otherwise, performance becomes unpredictable.

Storage Still Matters

Run the emulator on an SSD.

You’ll notice faster:

  • Boot times
  • App installs
  • General responsiveness

On HDDs, even a well-configured setup will feel slow.

Internal Storage vs Expanded Storage

This part is easy to overlook, but it matters depending on what you’re testing.

Internal Storage (Data Partition):

  • Used by apps for installs, cache, databases
  • Affects app install speed and runtime behavior

Expanded Storage (SD Card):

  • Simulates external storage
  • Used for media, file access, downloads

What to do in practice:

  • Keep internal storage reasonable (2–6 GB) for most dev work
  • Increase this only if you frequently install large apps or test media-heavy use cases.
  • Use expanded storage only if your app relies on file APIs, media handling, or scoped storage.

Why this matters: Over-allocating storage doesn’t improve performance. It just increases disk usage and snapshot size.

For most workflows, default values are fine unless you have a specific need.

Kotlin Example: Detecting Emulator

Sometimes you want to adjust behavior when running on an emulator.

Kotlin
fun isProbablyEmulator(): Boolean {
    val fingerprint = android.os.Build.FINGERPRINT.lowercase()
    val model = android.os.Build.MODEL.lowercase()
    val manufacturer = android.os.Build.MANUFACTURER.lowercase()
    val brand = android.os.Build.BRAND.lowercase()
    val device = android.os.Build.DEVICE.lowercase()

    return (fingerprint.startsWith("generic") ||
        fingerprint.contains("vbox") ||
        model.contains("emulator") ||
        manufacturer.contains("genymotion") ||
        (brand.startsWith("generic") && device.startsWith("generic")))
}

When to Use This

This works for:

  • Debug toggles
  • Logging changes
  • Small performance adjustments

Don’t use it for anything security-related. It’s not reliable enough.

Common Pitfalls in Real Projects

These show up often in production teams:

  • Over-allocating CPU/RAM and slowing the host
  • Relying entirely on emulators instead of real devices
  • Using snapshots in automated testing
  • Ignoring host machine constraints
  • Running multiple heavy processes alongside the emulator

What This Guide Doesn’t Replace

Even with perfect Android Emulator Settings, you still need:

Real Device Testing

Emulators don’t fully replicate:

  • Thermal throttling
  • OEM customizations
  • Real GPU behavior

Performance Profiling

Use tools like:

  • Android Profiler
  • Systrace
  • Frame timing metrics

Tuning without measurement is guesswork.

Test Strategy

A solid setup includes:

  • Emulator for fast iteration
  • Real devices for validation
  • Cloud testing (e.g., device farms) for scale

A Reliable Baseline Configuration

If you want something that works in most environments:

  • CPU: 2 cores
  • RAM: 2–4 GB
  • Graphics: Hardware (GLES 2.0/3.0)
  • System Image: x86_64 (or ARM on Apple Silicon)
  • Storage: SSD
  • Boot Mode: Use Quick Boot for development and Cold Boot for CI.

This setup prioritizes stability over raw speed.

FAQs

What are the best Android Emulator settings in 2026?

Use hardware acceleration, x86_64 (or ARM on Apple Silicon), 2–4 GB RAM, and hardware GPU rendering.

How many CPU cores should I use?

Start with 2. Increase only if you actually need more.

Is Quick Boot safe?

Fine for development. For testing or CI, use cold boot.

Do I still need real devices?

Yes. Emulators don’t fully match real-world behavior.

Conclusion

There’s no perfect configuration that works for every setup.

The goal is simple: keep your system responsive and your emulator predictable.

Once your Android Emulator Settings are in a good place, you’ll spend less time waiting and more time building.

new ripple api in jetpack compose

New Ripple API in Jetpack Compose: What Changed and How to Use It (Complete Guide)

Jetpack Compose continues to evolve, and one of the most interesting updates is the new Ripple API. If you’ve been building modern Android UIs, you’ve probably used ripple effects to give users visual feedback when they tap on buttons, cards, or other interactive elements. That subtle wave animation plays a big role in making interactions feel responsive and intuitive.

With the latest updates, Google has refined how ripple indications work in Compose. The new approach makes ripple effects more efficient, more customizable, and better aligned with Material Design 3.

In this article, we’ll explore what changed, why these updates matter, and how you can start using the new Ripple API in Jetpack Compose in your apps.

What’s Covered in This Guide

We’ll walk through:

  • What ripple effects are in Jetpack Compose
  • Why the ripple API was updated
  • How the new Ripple API works
  • How to implement it using Kotlin
  • Best practices for customizing ripple behavior

By the end, you’ll have a clear understanding of how the new ripple system works and how to apply it effectively in your Compose UI.

What Is Ripple in Jetpack Compose?

Ripple is the touch feedback animation shown when a user taps or presses a UI component.

For example:

  • Buttons
  • Cards
  • List items
  • Icons
  • Navigation items

When the user taps an element, a circular wave spreads from the touch point.

This animation improves:

  • User experience
  • Accessibility
  • Visual feedback
  • Interaction clarity

In Material Design, ripple is the default interaction effect.

In Jetpack Compose, ripple is typically used with clickable modifiers.

Kotlin
Modifier.clickable { }

By default, this modifier automatically adds ripple feedback.

Why the Ripple API Changed

For a long time, ripple effects in Jetpack Compose were implemented through the Indication system, typically using rememberRipple(). While this approach worked well, it came with a few limitations.

Composition overhead: Since rememberRipple() was a composable function, it participated in the recomposition cycle. In some cases, this introduced unnecessary overhead for something that should ideally remain lightweight.

Memory usage: Each usage created new state objects, which could increase memory usage when ripple effects were applied across many UI components.

Tight coupling with Material themes: The implementation was closely tied to Material 2 and Material 3. This made it less flexible for developers building custom design systems or UI frameworks.

To address these issues, the ripple implementation has been redesigned using the Modifier.Node architecture. This moves ripple handling closer to the rendering layer, allowing it to be drawn more efficiently without triggering unnecessary recompositions.

As a result, the updated API makes ripple behavior:

  • More performant
  • More consistent with Material 3
  • Easier to customize
  • Better aligned with the modern Indication system

Overall, this change simplifies how ripple effects are handled while improving performance and flexibility for Compose developers.

Old Ripple Implementation (Before the Update)

Before the New Ripple API in Jetpack Compose, developers often used rememberRipple().

Kotlin
Modifier.clickable(
    indication = rememberRipple(),
    interactionSource = remember { MutableInteractionSource() }
) {
    // Handle click
}

  • indication → defines the visual feedback
  • rememberRipple() → creates ripple animation
  • interactionSource → tracks user interactions (press, hover, focus)

Although this worked well, it required extra setup for customization.

The New Ripple API in Jetpack Compose

The New Ripple API in Jetpack Compose simplifies ripple creation and aligns it with Material3 design system updates.

The ripple effect is now managed through Material ripple APIs and better indication handling.

In most cases, developers no longer need to manually specify ripple.

Default Material components automatically apply ripple.

Kotlin
Button(onClick = { }) {
    Text("Click Me")
}

This button already includes ripple.

However, when working with custom layouts, you may still need to configure ripple manually.

Key Changes from Old to New

Key changes in Compose Ripple APIs (1.7+)

  • rememberRipple() is deprecated. Use ripple() instead.
     The old API relied on the legacy Indication system, while ripple() works with the new node-based indication architecture.
  • RippleTheme and LocalRippleTheme are deprecated.
     Material components no longer read LocalRippleTheme. For customization use RippleConfiguration / LocalRippleConfiguration or implement a custom ripple.
  • Many components now default interactionSource to null, allowing lazy creation of MutableInteractionSource to reduce unnecessary allocations.
  • The indication system moved to the Modifier.Node architecture.
     Indication#rememberUpdatedInstance was replaced by IndicationNodeFactory for more efficient rendering.

Basic Example Using the New Ripple API

Let’s start with a simple example by creating a clickable Box with a ripple effect. This demonstrates how touch feedback appears when a user interacts with a UI element.

Before looking at the new approach, here’s how ripple was typically implemented in earlier versions of Compose.

Old implementation (Deprecated):

Kotlin
Box(
    modifier = Modifier.clickable(
        onClick = { /* action */ },
        interactionSource = remember { MutableInteractionSource() },
        indication = rememberRipple()
    )
) {
    Text("Tap me!")
}

The previous implementation relied on rememberRipple(), which has now been replaced by the updated ripple API.

Using the New Ripple API:

Here’s how you can implement the same behavior using the updated ripple system.

Kotlin
@Composable
fun RippleBox() {
    val interactionSource = remember { MutableInteractionSource() } // or pass null to let Compose create it lazily
    
    Box(
        modifier = Modifier
            .size(120.dp)
            .background(Color.LightGray)
            .clickable(
                interactionSource = interactionSource,
                indication = ripple(), // From material3 or material
                onClick = {}
            )
    ) {
        Text("Tap me!")
    }
}

In many cases you can simply pass interactionSource = null, which allows Compose to lazily create it only when needed.

Understanding the Key Components

MutableInteractionSource

Kotlin
val interactionSource = remember { MutableInteractionSource() }

MutableInteractionSource emits interaction events such as:

  • Press
  • Focus
  • Hover
  • Drag

Indications like ripple observe these events to trigger animations.
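If you do create your own MutableInteractionSource, you can collect the same stream the ripple observes. A sketch (the composable name is illustrative; imports assume Compose Foundation and Material 3):

```kotlin
import android.util.Log
import androidx.compose.foundation.clickable
import androidx.compose.foundation.interaction.MutableInteractionSource
import androidx.compose.foundation.interaction.PressInteraction
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.size
import androidx.compose.material3.ripple
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun PressLoggingBox() {
    val interactionSource = remember { MutableInteractionSource() }

    // Collect the same interaction stream the ripple indication observes.
    LaunchedEffect(interactionSource) {
        interactionSource.interactions.collect { interaction ->
            when (interaction) {
                is PressInteraction.Press ->
                    Log.d("Interactions", "Press at ${interaction.pressPosition}")
                is PressInteraction.Release -> Log.d("Interactions", "Release")
                is PressInteraction.Cancel -> Log.d("Interactions", "Cancel")
            }
        }
    }

    Box(
        modifier = Modifier
            .size(120.dp)
            .clickable(
                interactionSource = interactionSource,
                indication = ripple(),
                onClick = {}
            )
    )
}
```

This is the main reason to keep creating an interaction source yourself; otherwise passing null is simpler.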

clickable modifier

Kotlin
Modifier.clickable()

This makes the composable interactive and triggers ripple on tap.

ripple()

Kotlin
indication = ripple()

ripple() is the new ripple API in Jetpack Compose and replaces the deprecated rememberRipple() implementation.

By default:

  • The ripple color is derived from MaterialTheme
  • The ripple originates from the touch point
  • The ripple is bounded within the component

Unlike the previous API, ripple() is not a composable function and works with the newer Modifier.Node-based indication system, which reduces allocations and improves performance.

Benefits of the New Ripple API

The updated API offers several improvements:

  • Simpler API — fewer concepts to manage
  • Better performance — avoids unnecessary recompositions
  • Cleaner syntax — easier to read and maintain
  • More flexibility for modern Compose UI architectures

Customizing Ripple in Jetpack Compose

One advantage of the New Ripple API in Jetpack Compose is easier customization.

You can modify:

  • color
  • radius
  • bounded/unbounded ripple

Example: Changing Ripple Color

Kotlin
.clickable(
    interactionSource = interactionSource,
    indication = ripple(
        color = Color.Red
    ),
    onClick = {}
)

Here we customize the ripple color.

When the user taps the component, the ripple will appear red instead of the default theme color.

Example: Unbounded Ripple

By default, ripple is bounded, meaning it stays inside the component.

If you want ripple to spread outside the element:

Kotlin
indication = ripple(
    bounded = false
)

Use Cases

Unbounded ripple works well for:

  • floating action buttons
  • icon buttons
  • circular elements
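As a sketch, an icon-sized target might pair an unbounded ripple with an explicit radius, similar in spirit to how IconButton draws its own feedback (the composable name is illustrative):

```kotlin
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.size
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Favorite
import androidx.compose.material3.Icon
import androidx.compose.material3.ripple
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun LikeIcon(onClick: () -> Unit) {
    Box(
        modifier = Modifier
            .size(48.dp)
            .clickable(
                interactionSource = null,
                // Unbounded ripple spreads past the 48.dp bounds,
                // clipped to a 24.dp circle around the touch point.
                indication = ripple(bounded = false, radius = 24.dp),
                onClick = onClick
            ),
        contentAlignment = Alignment.Center
    ) {
        Icon(Icons.Default.Favorite, contentDescription = "Like")
    }
}
```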

Example: Setting Ripple Radius

You can also control ripple size.

Kotlin
indication = ripple(
    radius = 60.dp
)

The radius defines how far the ripple spreads from the touch point.

This can help match custom UI designs.

Advanced Customization: RippleConfiguration

If you want to change the color or the alpha (transparency) of your ripples globally or for a specific part of your app, the deprecated LocalRippleTheme has been replaced by RippleConfiguration and LocalRippleConfiguration. This allows you to customize ripple appearance for a specific component or subtree of your UI.

Example: Custom Ripple

Kotlin
val myCustomRippleConfig = RippleConfiguration(
    color = Color.Magenta,
    rippleAlpha = RippleAlpha(
        pressedAlpha = 0.2f,
        focusedAlpha = 0.2f,
        draggedAlpha = 0.1f,
        hoveredAlpha = 0.4f
    )
)

CompositionLocalProvider(
    LocalRippleConfiguration provides myCustomRippleConfig
) {
    Button(onClick = { }) {
        Text("I have a Magenta Ripple!")
    }
}

RippleConfiguration

A configuration object that defines the visual appearance of ripple effects.

RippleAlpha

Controls the ripple opacity for different interaction states:

  • pressedAlpha
  • focusedAlpha
  • draggedAlpha
  • hoveredAlpha

CompositionLocalProvider

Wraps a section of UI and provides a custom ripple configuration to all child components that read LocalRippleConfiguration.

Disabling Ripple

You can disable ripple effects completely:

Kotlin
CompositionLocalProvider(LocalRippleConfiguration provides null) {
    Button(onClick = {}) {
        Text("No ripple")
    }
}
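To suppress the ripple on a single element rather than a whole subtree, you can also pass indication = null directly to clickable; a minimal sketch:

```kotlin
Box(
    modifier = Modifier.clickable(
        interactionSource = null,
        indication = null, // no visual feedback for this element only
        onClick = { }
    )
) {
    Text("No ripple here")
}
```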

When You Do NOT Need to Use Ripple Manually

With the new ripple API in Jetpack Compose, many Material components already include ripple feedback by default. This means you usually don’t need to manually specify indication = ripple().

Examples include:

  • Button
  • Card (clickable version in Material3)
  • ListItem
  • IconButton
  • NavigationBarItem

These components internally handle interaction feedback using the ripple system.

Kotlin
Card(
    onClick = { }
) {
    Text("Hello")
}

In Material3, providing onClick automatically makes the Card clickable and displays the ripple effect.

No manual ripple indication is required.

Best Practices for Using the New Ripple API in Jetpack Compose

1. Prefer Default Material Components

Material components already include ripple behavior.

This keeps UI consistent with Material Design.

2. Avoid Over-Customizing Ripple

Too much customization can create inconsistent UX.

Stick with theme defaults unless necessary.

3. Use interactionSource = null Unless You Need It

In modern Compose versions, you usually do not need to create a MutableInteractionSource manually.

Kotlin
Modifier.clickable(
    interactionSource = null,
    indication = ripple(),
    onClick = { }
)

Passing null allows Compose to lazily create the interaction source only when required.

Create your own MutableInteractionSource only if you need to observe interaction events.

4. Keep Ripple Bounded for Most UI

Bounded ripples keep the animation inside the component bounds and generally look cleaner.

This is the default behavior for most Material components.

Use unbounded ripple only when the design specifically requires it.

Performance Improvements in the New Ripple API

The new ripple implementation in Jetpack Compose introduces several internal improvements.

Reduced allocations

Ripple now uses the Modifier.Node architecture, which reduces object allocations compared to the older implementation.

Improved rendering efficiency

Ripple drawing is handled through node-based modifiers, making the rendering lifecycle more efficient.

Updated Indication system

Ripple is now implemented using IndicationNodeFactory, which replaces the older Indication implementation that relied on rememberUpdatedInstance.

Common Mistakes Developers Make

Using old rememberRipple()

Many developers still use:

Kotlin
rememberRipple()

This API is now deprecated.

Use the modern API instead:

Kotlin
ripple()

Manually creating InteractionSource unnecessarily

Older examples often include:

Kotlin
interactionSource = remember { MutableInteractionSource() }

In modern Compose versions, you can usually pass:

Kotlin
interactionSource = null

This allows Compose to lazily create the interaction source only when needed.

Create your own MutableInteractionSource only when you need to observe interaction events.

Adding Ripple to Non-clickable UI

Ripple should be used only on interactive components such as buttons, cards, or clickable surfaces.

Using ripple on static UI elements can create confusing user experiences.

Migration Guide: Old API to New Ripple API

Old implementation:

Kotlin
Modifier.clickable(
    interactionSource = remember { MutableInteractionSource() },
    indication = rememberRipple(),
    onClick = {}
)

New implementation:

Kotlin
Modifier.clickable(
    interactionSource = null,
    indication = ripple(),
    onClick = {}
)

Key changes:

  • rememberRipple() → replaced with ripple()
  • interactionSource can now be null, allowing Compose to lazily create it when needed

This simplifies the code and avoids unnecessary allocations.

If you need to observe interaction events, you can still provide your own MutableInteractionSource.

Conclusion

The New Ripple API in Jetpack Compose simplifies how developers implement touch feedback while improving performance and consistency.

Key takeaways:

  • Ripple provides visual feedback for user interactions
  • The new API replaces rememberRipple() with ripple()
  • Material components already include ripple by default
  • Custom components can easily add ripple using Modifier.clickable
  • The updated system improves performance and flexibility

If you build modern Android apps with Jetpack Compose, understanding the New Ripple API in Jetpack Compose is essential for creating responsive and user-friendly interfaces.

.aiexclude File

How to Use the .aiexclude File in Android Studio for Safer AI Context Sharing

AI-powered coding assistants have completely changed how Android developers write code. Features like Gemini in Android Studio read your project files, understand your codebase structure, and suggest intelligent completions. That’s incredibly helpful — but it also raises an important question:

What exactly is the AI reading?

The answer is often “more than you think.” Configuration files, API keys stored in local .properties files, internal endpoint URLs, analytics tokens — all of these can end up in the AI’s context window if you’re not careful.

That’s exactly where the .aiexclude file comes in. It’s Android Studio’s answer to the .gitignore file, but instead of telling Git what to ignore, it tells the AI assistant what files should stay completely off-limits.

In this guide, we’ll walk you through everything you need to know about the .aiexclude file — what it is, why it matters, how to create and configure it, and real-world patterns to protect your project.

What Is the .aiexclude File?

The .aiexclude file is a plain text configuration file that tells Android Studio’s AI features which files and folders it should never index, read, or use as context when generating suggestions.

Think of it like a privacy wall between your sensitive project files and the AI. When a file is listed in the .aiexclude file, it simply becomes invisible to the AI — it won’t factor into any code completions, refactoring suggestions, or AI-assisted search results.

This feature was introduced as developers started using AI assistants more deeply in their workflows and needed a simple, declarative way to control what data gets shared.

Why Does This Matter?

Here’s a realistic scenario: You’re building a fintech app. You have a local.properties file with a Stripe API key sitting in your project root. Your .gitignore already excludes it from version control. But your AI assistant doesn’t know about .gitignore — it reads every file it can find in your project.

Without a .aiexclude file, that API key could end up in the AI’s context. With one, you can ensure it’s never touched.

Where to Place the .aiexclude File

The .aiexclude file can live in two places, and the location determines its scope:

1. Project root directory — Applies rules across the entire project.

Plaintext
MyAndroidApp/
├── .aiexclude          ← covers the whole project
├── app/
│   └── src/
├── local.properties
└── build.gradle

2. Inside a specific module or subdirectory — Applies rules only to that folder and its contents.

Plaintext
MyAndroidApp/
├── app/
│   ├── .aiexclude      ← covers only the /app module
│   └── src/
└── secrets/
    ├── .aiexclude      ← covers only /secrets
    └── api_keys.txt

You can even have multiple .aiexclude files in the same project, one per folder, with each one managing its own exclusion rules. They all work together, so there’s no conflict — Android Studio respects all of them.

How the .aiexclude File Syntax Works

The .aiexclude file uses a simple pattern syntax, very similar to .gitignore. Let’s break it down.

Basic File Exclusion

To exclude a specific file, just write its name or path:

Plaintext
# Exclude a specific file in the same directory
local.properties

# Exclude a file using a relative path
config/secrets.json

A line that begins with # is a comment and is ignored by the parser.

Excluding Entire Directories

Add a trailing slash / to target a whole folder:

Plaintext
# Exclude the entire secrets folder
secrets/

# Exclude a nested folder
app/src/main/assets/private/

Every file inside that folder — regardless of name or extension — becomes invisible to the AI.

Wildcard Patterns

Wildcards are your best friends here. The .aiexclude file supports standard glob patterns:

Plaintext
# Exclude all .properties files anywhere in the project
**/*.properties

# Exclude all JSON files in the config directory
config/*.json

# Exclude all files that start with "key_"
**/key_*

# Exclude everything inside any folder named "internal"
**/internal/**

The ** pattern means “match any number of directories,” so **/*.env would catch .env files no matter how deeply nested they are.
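The `**` vs `*` distinction can be demonstrated with the JDK's built-in glob matcher, which follows the same conventions for these patterns. This is an illustration of glob semantics, not Android Studio's actual .aiexclude parser:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class GlobDemo {
    // Returns true when the glob pattern matches the given relative path.
    static boolean globMatches(String pattern, String relativePath) {
        PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
        return matcher.matches(Path.of(relativePath));
    }

    public static void main(String[] args) {
        // "**" crosses directory boundaries; "*" stays within one path segment
        System.out.println(globMatches("**/*.properties", "app/src/local.properties")); // true
        System.out.println(globMatches("**/*.properties", "local.properties"));         // false: "**/" needs a parent dir
        System.out.println(globMatches("config/*.json", "config/secrets.json"));        // true
        System.out.println(globMatches("config/*.json", "config/nested/extra.json"));   // false: "*" never crosses "/"
    }
}
```

Note the second case: a pattern with a `**/` prefix does not match a file sitting at the root itself, which is exactly the pitfall covered under "Forgetting Subdirectories" later in this guide.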

Negation with !

You can un-exclude something that was already covered by a broader rule, using !:

Plaintext
# Exclude all .properties files
**/*.properties

# ...but allow gradle.properties back in (it has no secrets)
!gradle.properties

Just like .gitignore, order matters here — later rules override earlier ones. So always put the negation after the broader exclusion.
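The "later rules override earlier ones" behavior can be sketched as a tiny evaluator: walk the rules top to bottom and let the last matching rule decide. This is a simplified model of the gitignore-style ordering described above, not Android Studio's actual implementation:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.util.List;

public class ExcludeRules {
    // Evaluates rules in order; the LAST matching rule wins.
    // A leading "!" flips the rule into a re-include.
    static boolean isExcluded(List<String> rules, String relativePath) {
        boolean excluded = false;
        for (String rule : rules) {
            boolean negated = rule.startsWith("!");
            String pattern = negated ? rule.substring(1) : rule;
            PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
            if (matcher.matches(Path.of(relativePath))) {
                excluded = !negated; // a later match overrides an earlier one
            }
        }
        return excluded;
    }

    public static void main(String[] args) {
        List<String> rules = List.of("*.properties", "!gradle.properties");
        System.out.println(isExcluded(rules, "local.properties"));  // true: caught by the broad rule
        System.out.println(isExcluded(rules, "gradle.properties")); // false: re-included by the negation
    }
}
```

Flipping the order (negation first, broad rule last) would exclude gradle.properties again, which is why the negation must come after the broader exclusion.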

Creating Your First .aiexclude File

Let’s walk through setting up a .aiexclude file from scratch in a typical Android project.

Step 1: Create the File

Right-click the project root in Android Studio’s Project view, select New → File, and name it exactly:

.aiexclude

No extension. No prefix. Just .aiexclude.

Tip: If you’re on Windows and File Explorer is hiding files starting with a dot, use Android Studio’s built-in file creation — it handles this correctly.

Step 2: Add Your Exclusion Rules

Open the newly created .aiexclude file and start adding your rules. Here’s a practical starter template:

Plaintext
# ─────────────────────────────────────────────
# .aiexclude — AI context exclusion rules
# Keeps sensitive and irrelevant files out of
# Android Studio's AI assistant context.
# ─────────────────────────────────────────────

# Local configuration with API keys or secrets
local.properties
*.env
*.env.*

# Keystores and signing credentials
**/*.jks
**/*.keystore
keystore.properties

# Service account and OAuth credential files
google-services-staging.json
**/credentials/
**/service_account*.json

# Internal analytics or experiment configs
**/internal_experiments/
**/ab_test_config.json

# Build outputs - not useful for AI context
build/
**/build/
.gradle/

# Auto-generated files (reduce AI noise)
**/generated/
**/*Generated.java
**/*Generated.kt

# Raw data or large asset files
**/raw/
**/*.csv
**/*.sqlite
**/*.db

# Private documentation
docs/internal/
INTERNAL_NOTES.md

Step 3: Verify It’s Working

After saving the .aiexclude file, restart Android Studio or invalidate caches (File → Invalidate Caches / Restart). The AI assistant should now skip the excluded files entirely when generating suggestions.

You can confirm this by checking whether the AI references any content from an excluded file — it shouldn’t.

Real-World Use Cases: What to Exclude and Why

Here are common scenarios where the .aiexclude file becomes genuinely essential.

Use Case 1: Protecting API Keys in local.properties

The local.properties file is the most common place Android developers store sensitive keys — Maps API keys, Firebase project IDs, payment gateway tokens. It’s excluded from Git, but not from AI by default.

Plaintext
# .aiexclude

# Keep the AI away from local config with secrets
local.properties
keystore.properties

Why this matters: If the AI reads local.properties, it might include your API key in a generated code snippet or log statement — even innocently, in a test file it suggests.

Use Case 2: Excluding Generated Code

Generated files (like Room database implementations, Hilt component files, or proto-generated classes) create a lot of noise for the AI. The AI might try to “help” by referencing or modifying them, even though they’re auto-generated and will be overwritten on the next build.

Plaintext
# .aiexclude

# Auto-generated files - don't waste AI context on these
**/generated/
**/*_Impl.kt
**/*.pb.java

Generated files can confuse the AI or cause it to suggest changes to code that isn’t meant to be manually edited. Excluding them improves suggestion quality.

Use Case 3: Excluding Proprietary Business Logic

Maybe you’re working on a module that contains proprietary algorithms or confidential business logic — something your company doesn’t want indexed anywhere outside of approved systems.

Plaintext
# .aiexclude placed inside /pricing-engine module

# Protect proprietary pricing logic from AI indexing
algorithms/
models/pricing/

Even if you trust the AI tool itself, having strict boundaries on what it accesses is good security hygiene — especially in regulated industries.

Use Case 4: Large Files That Hurt Performance

The AI doesn’t need to read a 50MB SQLite database file or a massive CSV dataset. Including them wastes AI context budget and can slow things down.

Plaintext
# .aiexclude

# Large files that don't help the AI at all
**/*.sqlite
**/*.db
**/*.csv
assets/large_dataset.json

AI context windows have limits. Keeping them focused on actual source code means better, more relevant suggestions.

Common Mistakes to Avoid with the .aiexclude File

Even experienced developers make these slip-ups when first working with the .aiexclude file. Here’s what to watch out for.

Mistake 1: Using Absolute Paths

This won’t work as expected:

Plaintext
# Absolute paths don't work
/Users/yourname/AndroidStudioProjects/MyApp/local.properties

Always use relative paths from the location of the .aiexclude file itself:

Plaintext
# Correct — relative path
local.properties

Mistake 2: Forgetting Subdirectories

This only excludes secrets.json at the root level:

Plaintext
# Only matches root-level file
secrets.json

If the file might exist deeper in the project:

Plaintext
# Matches the file anywhere in the project
**/secrets.json

Mistake 3: Not Committing the .aiexclude File to Version Control

Unlike local.properties, the .aiexclude file itself is not sensitive — it just describes what’s sensitive. You should absolutely commit it to Git so your whole team benefits from the same exclusion rules.

Plaintext
git add .aiexclude
git commit -m "Add .aiexclude to protect sensitive files from AI context"

Mistake 4: Over-Excluding Everything

It can be tempting to exclude huge chunks of your project “just to be safe,” but that defeats the purpose of the AI assistant. If the AI can’t see your code, it can’t help you write better code.

Be selective. Exclude what’s genuinely sensitive or noisy — not everything.

The .aiexclude File vs. .gitignore: What’s the Difference?

People often ask whether these two files overlap. Here's the difference at a glance:

  • .gitignore tells Git which files to keep out of version control.
  • The .aiexclude file tells Android Studio's AI assistant which files to keep out of its context.
  • Both use similar glob-style patterns, and both belong in version control.

They’re complementary, not replacements for each other. A file can be in .gitignore but still readable by the AI — that’s the exact problem the .aiexclude file solves.

Team Workflow: Making .aiexclude a Team Standard

If you’re leading a team, the .aiexclude file should be part of your project setup checklist — right alongside .gitignore and EditorConfig.

Here’s how to make it a team standard:

Add it to your project template. If your team uses a custom Android project template (or a cookiecutter script), bake in a sensible default .aiexclude file from day one.

Include it in your code review checklist. When a new secret, config, or sensitive module gets added to the project, verify the .aiexclude file is updated accordingly.

Document it in your README. A single line in your project’s README explaining that the project uses a .aiexclude file helps new team members understand the setup quickly.

Treat it like a security document. Additions to the .aiexclude file should go through a quick review — just like changes to SECURITY.md or secrets management configs.

Advanced Pattern: Module-Level .aiexclude Files

In larger, multi-module Android projects, it often makes more sense to manage exclusions at the module level rather than maintaining one giant .aiexclude file at the root.

Plaintext
MyAndroidApp/
├── .aiexclude                    ← project-wide rules
├── app/
│   ├── .aiexclude                ← app module rules
│   └── src/
├── feature-payments/
│   ├── .aiexclude                ← payment module rules (strictest)
│   └── src/
└── feature-onboarding/
    └── src/

Project-root .aiexclude:

Plaintext
# Global rules
local.properties
**/*.jks
**/*.keystore
**/build/

feature-payments/.aiexclude:

Plaintext
# Extra-strict for this module — payment logic is proprietary
src/

This hierarchical approach gives you fine-grained control without cluttering a single file.
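One way to picture the stacking: each .aiexclude file's patterns apply relative to its own directory, and a file is excluded if any enclosing scope has a matching rule. The sketch below models that idea; the method name, the map-based rule representation, and the exact precedence are assumptions for illustration, not Android Studio's documented behavior:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.util.LinkedHashMap;
import java.util.Map;

public class StackedExcludes {
    // Each entry maps the directory holding an .aiexclude file ("" = project
    // root) to one of its patterns. A file is excluded if any enclosing
    // scope's pattern matches the portion of the path below that scope.
    static boolean isExcluded(Map<String, String> scopeToPattern, String projectRelativePath) {
        Path path = Path.of(projectRelativePath);
        for (Map.Entry<String, String> entry : scopeToPattern.entrySet()) {
            Path below;
            if (entry.getKey().isEmpty()) {
                below = path; // root rules see the whole project
            } else {
                Path scope = Path.of(entry.getKey());
                if (!path.startsWith(scope)) continue; // a rule applies only inside its own directory
                below = scope.relativize(path);
            }
            PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:" + entry.getValue());
            if (matcher.matches(below)) return true; // rules from every level stack together
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, String> rules = new LinkedHashMap<>();
        rules.put("", "**/*.jks");               // project-root .aiexclude
        rules.put("feature-payments", "src/**"); // module-level .aiexclude
        System.out.println(isExcluded(rules, "app/release.jks"));                 // true
        System.out.println(isExcluded(rules, "feature-payments/src/Engine.kt"));  // true
        System.out.println(isExcluded(rules, "feature-onboarding/src/Intro.kt")); // false
    }
}
```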

Frequently Asked Questions About the .aiexclude File

Q: Does the .aiexclude file affect code completion outside of AI features?

No. The .aiexclude file only affects the AI assistant. Standard IntelliJ code completion, navigation, and refactoring tools are not impacted.

Q: Can I use the .aiexclude file in other JetBrains IDEs?

The .aiexclude file was introduced in the context of Android Studio’s AI integration. Support in other JetBrains IDEs may vary — check the documentation for the specific IDE.

Q: What happens if I have conflicting rules between two .aiexclude files?

Each .aiexclude file applies to its own directory and below. There’s no true “conflict” — rules from parent and child directories stack together. The most specific rule (closest to the file) generally wins, similar to .gitignore behavior.

Q: Will the .aiexclude file protect me from ALL data leakage?

The .aiexclude file is a strong first line of defense for local AI features in Android Studio. However, it does not control what happens when you use external AI tools, paste code into chat interfaces, or use other plugins. Treat it as one layer of a broader security practice.

Q: Should I exclude google-services.json?

It depends. The google-services.json that goes into your app usually contains project IDs and API keys. While it’s not as sensitive as a private key, it’s worth excluding it from AI context — especially the production variant. You might do this:

Plaintext
# .aiexclude
google-services.json
app/google-services.json

Quick Reference: Recommended Default .aiexclude Template

Copy this into any Android project and customize as needed:

Plaintext
# ─────────────────────────────────────
# .aiexclude — Recommended Default
# Android Studio AI Context Exclusions
# ─────────────────────────────────────

# Secrets and local config
local.properties
keystore.properties
*.env
*.env.*
.env.local

# Signing keystores
**/*.jks
**/*.keystore

# Firebase and Google service files
google-services.json
GoogleService-Info.plist

# Service accounts and credentials
**/credentials/
**/service_account*.json

# Build artifacts
**/build/
.gradle/
**/.gradle/

# Auto-generated code
**/generated/
**/*Generated.java
**/*Generated.kt
**/*_Impl.kt

# Large binary or data assets
**/*.sqlite
**/*.db
**/*.csv
**/*.parquet

# Internal documentation
docs/internal/
INTERNAL*.md
CONFIDENTIAL*

# IDE-specific artifacts
.idea/workspace.xml
.idea/tasks.xml

Conclusion

The .aiexclude file is a small file with a big impact. In just a few lines, it lets you control exactly what your AI assistant sees — keeping sensitive keys, proprietary logic, and noisy generated files out of its context while letting it focus on the code that actually matters.

Here’s a quick recap of what we covered:

  • The .aiexclude file acts like a privacy filter between your project and the AI assistant in Android Studio.
  • Place it in your project root for global rules, or in subdirectories for module-level control.
  • It uses glob-style patterns very similar to .gitignore.
  • Always commit it to version control so your whole team benefits.
  • Combine it with other security practices — it’s one layer, not a complete solution.

If you haven’t added a .aiexclude file to your Android project yet, now’s the time. Open Android Studio, create the file, drop in the template above, and customize it for your project’s needs. 

It takes five minutes and pays dividends in security, performance, and peace of mind.
