
We recently surveyed our engineering team about their experiences working with AI coding assistants. The responses were eye-opening – while everyone agreed that AI has transformed how they build software, the approaches that actually work in practice are quite different from what you might expect.
The engineers who've found the most success aren't those who've mastered prompt engineering or discovered secret AI techniques. Instead, they've learned to structure their development process in ways that amplify AI's strengths while mitigating its weaknesses. They've developed workflows that turn AI from a sometimes-helpful assistant into a reliable development partner.
We've compiled their best insights here – 10 practical tips that can immediately improve how you build with AI. Whether you're shipping your first AI-assisted feature or leading a team that's been using these tools for months, these strategies will help you build better, more maintainable projects faster.
Tip 1: Write a Feature Roadmap That AI Can Execute
Before touching any AI tool, create a step-by-step roadmap. Break your feature into checkpoints: data model defined, API endpoints created, UI components built, tests passing. Each checkpoint should be verifiable – either it works or it doesn't.
Start your AI chat like this:
"I need to implement OAuth login with Google. Let's start by defining the spec together. What are the key components we need to consider for a secure OAuth flow?"
The AI helps you think through edge cases: token refresh, session management, error states. You're not asking for code yet – you're building a shared mental model. Next prompt might be:
"What security considerations should we document before implementing?"
This turn-by-turn approach builds a comprehensive spec, not a half-baked implementation.
Tip 2: Use State Management Libraries to Prevent Cascading Fixes
Pure components with centralized state aren't just good practice – they're essential for AI-assisted development. When state is scattered across components, AI fixes create new bugs. You've seen this: fix the dropdown, break the form. Fix the form, break the validation.
Example: Instead of letting AI generate components with useState everywhere, define your auth state shape first:
interface AppState {
  auth: {
    user: User | null;
    session: Session | null;
    isLoading: boolean;
    error: AuthError | null;
  };
  ui: {
    showMFAPrompt: boolean;
    loginMethod: 'google' | 'github' | 'email';
  };
}
Pick Redux, Zustand, MobX – doesn't matter. What matters: auth state changes happen in one place. When AI generates a fix for session refresh, it modifies a single auth reducer, not scattered useEffect hooks across your app.
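To make "one place" concrete, here's a hand-rolled sketch of a centralized auth store – no library, so the principle stays visible. The action names and the `User` shape are illustrative, not from any real app; with Zustand or Redux the API differs but the idea is the same:

```typescript
// A minimal centralized store: every auth transition goes through setState,
// so an AI-generated fix lands in exactly one place.
type User = { id: string; email: string };

interface AuthState {
  user: User | null;
  isLoading: boolean;
  error: string | null;
}

type Listener = (state: AuthState) => void;

function createAuthStore() {
  let state: AuthState = { user: null, isLoading: false, error: null };
  const listeners = new Set<Listener>();

  // The ONLY way auth state changes – fixes go here, not into scattered hooks.
  function setState(patch: Partial<AuthState>) {
    state = { ...state, ...patch };
    listeners.forEach((l) => l(state));
  }

  return {
    getState: () => state,
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l);
    },
    loginStart: () => setState({ isLoading: true, error: null }),
    loginSuccess: (user: User) => setState({ user, isLoading: false }),
    loginFailure: (error: string) =>
      setState({ error, isLoading: false, user: null }),
    logout: () => setState({ user: null }),
  };
}
```

Components subscribe and render; they never mutate auth state themselves.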
Tip 3: Define Types and Interfaces Before Implementation
Data structures determine whether your codebase scales or strangles itself. When you let AI generate code without type constraints, it invents its own data shapes – inconsistent, implicit, impossible to refactor.
Start every feature by defining your types:
interface AuthToken {
  accessToken: string;
  refreshToken: string;
  expiresAt: number;
  scope: string[];
}

interface AuthProvider {
  authenticate(credentials: Credentials): Promise<AuthToken>;
  refresh(token: string): Promise<AuthToken>;
  revoke(token: string): Promise<void>;
}
Now when you prompt:
"implement Google OAuth using the AuthProvider interface"
…the AI works within your constraints. It can't invent a different token structure or skip error handling you've defined. Types become your architectural guardrails.
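Here's what that constraint looks like in practice – a hypothetical `GoogleAuthProvider` that must satisfy the interface, so the compiler rejects any deviation from the agreed token shape. The token exchange is stubbed; the credential shape and return values are placeholders, not Google's real API:

```typescript
interface Credentials {
  code: string;
}

interface AuthToken {
  accessToken: string;
  refreshToken: string;
  expiresAt: number;
  scope: string[];
}

interface AuthProvider {
  authenticate(credentials: Credentials): Promise<AuthToken>;
  refresh(token: string): Promise<AuthToken>;
  revoke(token: string): Promise<void>;
}

// `implements AuthProvider` is the guardrail: the AI cannot invent a
// different token structure without a compile error.
class GoogleAuthProvider implements AuthProvider {
  async authenticate(credentials: Credentials): Promise<AuthToken> {
    // Real code would exchange `credentials.code` at the provider's
    // token endpoint; this stub just fabricates a conforming token.
    return {
      accessToken: `access-${credentials.code}`,
      refreshToken: `refresh-${credentials.code}`,
      expiresAt: Date.now() + 3600_000,
      scope: ["openid", "email"],
    };
  }

  async refresh(token: string): Promise<AuthToken> {
    return {
      accessToken: "access-renewed",
      refreshToken: token,
      expiresAt: Date.now() + 3600_000,
      scope: ["openid", "email"],
    };
  }

  async revoke(_token: string): Promise<void> {
    // Real code would call the provider's revocation endpoint.
  }
}
```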
Tip 4: Test Invariants, Not Implementation Details
Most AI-generated tests are worthless. They test that setUser was called, not that your auth flow actually works. The problem: AI mimics test patterns it's seen – mocking everything, asserting on internals, creating brittle suites that break with every refactor.
Instead, prompt for invariant testing:
"Write tests that verify these auth invariants:
1. Users cannot access protected routes without valid tokens
2. Expired tokens trigger automatic refresh
3. Failed refresh redirects to login
4. Concurrent requests share the same token refresh"
These tests survive implementation changes. Whether you use Redux or Zustand, fetch or axios, the invariants remain.
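Invariant 4 can be checked directly. A sketch, with a hypothetical `TokenManager` standing in for your real auth layer – the test asserts on observable behavior (one refresh, identical tokens), not on which functions were called:

```typescript
// Deduplicates concurrent refreshes: all callers await the same promise.
class TokenManager {
  refreshCount = 0; // exposed so the invariant is observable in tests
  private inflight: Promise<string> | null = null;

  private async doRefresh(): Promise<string> {
    this.refreshCount++;
    await new Promise((r) => setTimeout(r, 10)); // simulate network latency
    return `token-${this.refreshCount}`;
  }

  getToken(): Promise<string> {
    if (!this.inflight) {
      this.inflight = this.doRefresh().finally(() => {
        this.inflight = null;
      });
    }
    return this.inflight;
  }
}
```

Swap the internals – fetch for axios, class for closure – and this test still means the same thing.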
Tip 5: Review Every AI Output, Not Just at Checkpoints
AI momentum is dangerous. It generates plausible code quickly, you see green tests, and you keep prompting. Three hours later, you've built a castle on sand.
After each significant feature, force a hard stop:
"STOP HERE. Before continuing:
1. Test the full auth flow manually
2. Verify tokens are stored correctly
3. Check error handling for expired tokens
4. Confirm the UI reflects auth state changes"
Experienced engineers can smell AI-generated code that's been allowed to run wild. Keep it on a short leash.
Tip 6: Let AI Write Commit Messages, But Keep Them Human
AI summarizes a diff well, but left unconstrained it produces verbose, mechanical changelogs. Constrain it. Good prompt:
"Write a commit message for these changes. Be concise – one line summary, then 2-3 bullet points of key changes. Focus on what and why, not how."
Result:
feat: Add OAuth token refresh with automatic retry
- Implements exponential backoff for failed refresh attempts
- Stores tokens securely in httpOnly cookies
- Adds middleware to check token expiry before protected routes
Tip 7: Use Error Boundaries and Fallbacks from Day One
Start every feature with error boundaries:
interface ErrorBoundaryState {
  hasError: boolean;
  error: Error | null;
  errorInfo: ErrorInfo | null;
}
Prompt:
"Wrap the auth components in an error boundary that:
1. Catches authentication failures
2. Provides user-friendly error messages
3. Offers recovery actions (retry, contact support)
4. Logs errors to our monitoring service"
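In React that prompt yields an error boundary class with `componentDidCatch`; the same pattern can be sketched framework-agnostically. `withAuthBoundary` and the error-message mapping below are illustrative names, not from any library:

```typescript
type Recovery = "retry" | "contact-support";

interface FriendlyError {
  message: string;
  recovery: Recovery;
}

// Map raw errors to user-friendly messages plus a recovery action.
function toFriendlyError(err: unknown): FriendlyError {
  const msg = err instanceof Error ? err.message : String(err);
  if (msg.includes("expired")) {
    return {
      message: "Your session expired. Please sign in again.",
      recovery: "retry",
    };
  }
  return {
    message: "Something went wrong during sign-in.",
    recovery: "contact-support",
  };
}

// The boundary: catch the failure, log it, return a fallback instead of crashing.
async function withAuthBoundary<T>(
  action: () => Promise<T>,
  log: (err: unknown) => void = console.error, // wire to your monitoring service
): Promise<T | FriendlyError> {
  try {
    return await action();
  } catch (err) {
    log(err);
    return toFriendlyError(err);
  }
}
```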
Tip 8: Context Window Management Is Your Job
Create a CONTEXT.md file for your current work:
## Current Auth Implementation
- Using httpOnly cookies for token storage
- Refresh happens in middleware, not components
- All auth state lives in Zustand store
- Error boundaries handle auth failures
## Key Decisions
- Single auth context, no provider nesting
- Automatic retry with exponential backoff
Start each session by having AI read this file. When switching between features, update it.
Tip 9: Structure Logging for Humans and AI
Build comprehensive logging from the start:
logger.info('Auth flow started', {
  provider: 'google',
  timestamp: Date.now(),
  sessionId: generateId()
});

logger.error('Token refresh failed', {
  error: err.message,
  attemptNumber: retryCount,
  lastSuccessfulRefresh: lastRefreshTime,
  nextRetryIn: backoffMs
});
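A logger matching those calls can be tiny: one JSON line per event, parseable by both humans and any AI you paste logs into. A sketch only – production apps would reach for pino or winston, whose APIs differ:

```typescript
type Level = "info" | "warn" | "error";

// Emits one JSON object per line; `sink` is injectable for testing.
function createLogger(sink: (line: string) => void = console.log) {
  const emit = (level: Level, msg: string, ctx: Record<string, unknown> = {}) =>
    sink(JSON.stringify({ level, msg, ts: Date.now(), ...ctx }));
  return {
    info: (msg: string, ctx?: Record<string, unknown>) => emit("info", msg, ctx),
    warn: (msg: string, ctx?: Record<string, unknown>) => emit("warn", msg, ctx),
    error: (msg: string, ctx?: Record<string, unknown>) => emit("error", msg, ctx),
  };
}
```

Structured context fields (provider, attemptNumber, nextRetryIn) are what turn a log dump into something an AI can actually debug from.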
Tip 10: Build With the Next Model in Mind
Models keep improving, and the codebases that benefit most from each new generation are the ones whose intent is legible. This means:
- Document your invariants, not your implementation
- Write tests that explain your business logic
- Keep your interfaces stable even as internals evolve
- Structure code so better models can understand your intent
The Bottom Line
The teams thriving with AI aren't writing better prompts – they're building better systems. Systems where each piece has a clear purpose, where tests define behavior, where documentation captures decisions. When the next model generation arrives, these codebases will leap forward while others struggle to explain their tangled state to even smarter AI.