AI agent use cases
What makes a good use case
Learn the core criteria for identifying strong Ona use cases, from human-initiated workflows to low-review tasks, plus practical tips for working effectively with the agent
The following factors help identify strong use cases for Ona:
- Human-initiated workflows - A person (e.g., a developer) starts the agent with a task that ultimately leads to a report or a pull request
- Low review burden - The task needs minimal human review in follow-up, e.g., documentation rather than high-risk code changes
- Single development environment - The task can be performed in a single development environment (even if multiple tasks run in parallel)
- Large context analysis - The task requires parsing substantial amounts of code, e.g., architects querying codebases
| Use Case | Why It’s a Good Fit |
|---|---|
| AI-assisted code generation | Boilerplate that results in a pull request with low review burden |
| Customer bug reproduction & fixes | Debugging workflows such as recreating database states and implementing fixes |
| Adding missing error handling | Issues too small for tickets but too big to ignore |
| New API endpoint creation | Follows existing architectural patterns and replicates existing structures |
| Feature flag management | Clear, focused changes with low review burden |
| Code migrations | Focused migrations such as language or framework version updates |
| Test generation | Identifying and filling gaps in test coverage |
| Query optimization suggestions | Database query performance improvements |
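
For instance, an "adding missing error handling" task typically produces a small, self-contained diff that is easy to review. The sketch below is a hypothetical illustration of that kind of change; the `fetch_user_profile` function, its URL, and its dependencies are invented for this example and are not part of Ona.

```python
import logging

import requests  # hypothetical dependency used only for this illustration

logger = logging.getLogger(__name__)


def fetch_user_profile(user_id: str) -> dict | None:
    """Fetch a user profile, returning None instead of raising on transient failures."""
    try:
        response = requests.get(f"https://api.example.com/users/{user_id}", timeout=5)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as exc:
        # Before the change, this call had no error handling and a timeout
        # would crash the caller; now the failure is logged and surfaced as None.
        logger.warning("Failed to fetch profile for %s: %s", user_id, exc)
        return None
```

A change of this shape satisfies the criteria above: it is human-initiated, fits in one development environment, and needs only a quick review.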
When evaluating potential Ona use cases, ask: Does the task start with a human decision? Can it run in a single development environment? Does it require minimal code review? Does it benefit from deep code understanding? If yes, it’s likely a strong fit for Ona.