Reflections on Coding in the Age of LLMs

As we approach the end of 2024, I find myself reflecting on what has been the most transformative year in my programming career since I first learned to code. From getting my GitHub Copilot license in February and setting up Roo Code in VS Code to experimenting with AI agents in November, the integration of Large Language Models into software development has fundamentally changed not just how I write code, but how I think about problems, learn new concepts, and approach building software as a whole.

The Before and After

Early 2024: Finding My Workflow

At the beginning of the year, I was using ChatGPT for copy-paste development, but getting my GitHub Copilot license in late February changed everything. Until then, my usage consisted of:

  • Asking ChatGPT questions and copying responses
  • Basic explanations of complex algorithms
  • Occasional help with regex patterns
  • Manual context switching between browser and IDE

I treated it as a sophisticated search engine that could give conversational responses rather than a true development partner.

Late 2024: Integrated Workflow

Now, LLMs are woven throughout my entire development process:

  • Architecture discussions with AI before starting new projects
  • Real-time pair programming for complex implementations
  • Code review assistance for catching issues I might miss
  • Documentation generation that stays current with code changes
  • Learning acceleration for new technologies and patterns
  • Debugging partner for tracking down tricky issues

The change isn’t just quantitative; it’s qualitative. My approach to problem-solving itself has evolved.

Fundamental Shifts in Development Approach

From Implementation-First to Design-First

Previously, I would often dive straight into coding, figuring out the design as I went:

# Old approach: Start coding and figure it out
def process_orders(orders):
    # Start with basic logic and expand
    for order in orders:
        if order.status == 'pending':
            # Handle payment...
            # Update inventory...
            # Send notifications...
            # Etc.
            pass

Now, I spend much more time upfront discussing architecture with AI:

Me: “I need to design a system for processing e-commerce orders that handles payment, inventory, notifications, and error recovery. What architecture would you recommend?”

AI: Suggests a comprehensive approach with event sourcing, saga patterns, and proper error boundaries.

Result: Better-designed systems with fewer architectural surprises later.
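
To make that concrete, here is a minimal sketch of the saga idea that came out of one of those discussions: each step of order processing is paired with a compensating action, so a failure partway through can be rolled back cleanly. The `SagaStep` structure and the step functions are hypothetical, not the exact design the AI produced:

from dataclasses import dataclass
from typing import Callable

@dataclass
class SagaStep:
    name: str
    action: Callable[[dict], None]        # performs the step
    compensate: Callable[[dict], None]    # undoes the step on failure

def run_order_saga(order: dict, steps: list[SagaStep]) -> bool:
    """Run steps in order; on failure, compensate completed steps in reverse."""
    completed: list[SagaStep] = []
    for step in steps:
        try:
            step.action(order)
            completed.append(step)
        except Exception:
            # Roll back everything that already succeeded
            for done in reversed(completed):
                done.compensate(order)
            return False
    return True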

From Syntax Searching to Concept Exploration

Instead of googling “how to do X in language Y”, I have conceptual discussions:

Old: “How to sort dictionary by value in Python”

New: “I need to process user activity data to find the most engaged users. What are the different approaches for ranking and what are their trade-offs for different data sizes and update frequencies?”

This leads to better understanding of underlying concepts rather than just syntax memorisation.
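
To make the contrast concrete: the syntax question has a one-line answer, while the concept question surfaces trade-offs like a full sort versus a partial top-k selection. A small illustrative sketch (the activity data is made up):

import heapq

activity = {"alice": 42, "bob": 17, "carol": 99, "dave": 5}

# Syntax-level answer: full sort by value, O(n log n)
ranked = sorted(activity.items(), key=lambda kv: kv[1], reverse=True)

# Concept-level answer: top-k with a heap, O(n log k) - cheaper when
# the data is large and only a few top users are needed
top_two = heapq.nlargest(2, activity.items(), key=lambda kv: kv[1])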

From Feature-by-Feature to System-by-System

LLMs excel at understanding entire systems, so I’ve started thinking more holistically:

# Instead of implementing individual endpoints
@app.route('/users', methods=['POST'])
def create_user():
    # Implementation here
    pass

@app.route('/users/<id>', methods=['GET'])
def get_user(id):
    # Implementation here
    pass

# I discuss the entire user management system
"""
Design a comprehensive user management system with:
- Registration and authentication
- Profile management with validation
- Role-based permissions
- Audit logging
- Rate limiting
- Data privacy compliance (GDPR)

Show me the complete architecture, database schema,
API design, and key implementation patterns.
"""

This results in more cohesive, well-integrated systems.
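
As a small example of what falls out of such a discussion, role-based permissions can be applied declaratively at the route level rather than bolted onto each handler. This is a hedged sketch reusing the Flask-style `app` from above; `get_current_user` is a hypothetical auth helper:

from functools import wraps

def require_role(role: str):
    """Reject the request unless the current user holds the given role."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            user = get_current_user()  # hypothetical auth helper
            if role not in user.roles:
                return {"error": "forbidden"}, 403
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@app.route('/users/<id>', methods=['DELETE'])
@require_role('admin')
def delete_user(id):
    ...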

Learning Acceleration

Domain Expertise Acquisition

In 2024, I tackled several domains I had minimal experience with:

  • Machine Learning Operations: Built ML pipelines with proper versioning, monitoring, and deployment
  • Blockchain Development: Created smart contracts and DApps
  • Computer Graphics: Implemented 3D rendering algorithms
  • Distributed Systems: Designed fault-tolerant microservice architectures

In each case, LLMs accelerated my learning curve dramatically. What might have taken months of reading documentation and tutorials was compressed into weeks of focused learning with AI guidance.

Pattern Recognition Improvement

LLMs exposed me to patterns I wouldn’t have discovered otherwise:

# Learned about the Repository pattern for database abstraction
from abc import ABC, abstractmethod
from datetime import datetime, timedelta
from typing import Optional

class UserRepository(ABC):
    @abstractmethod
    async def create(self, user: User) -> User:
        pass

    @abstractmethod
    async def get_by_id(self, user_id: str) -> Optional[User]:
        pass

    @abstractmethod
    async def get_by_email(self, email: str) -> Optional[User]:
        pass

# Discovered the Specification pattern for complex queries
class UserSpecification(ABC):
    @abstractmethod
    def is_satisfied_by(self, user: User) -> bool:
        pass

    def and_(self, other: 'UserSpecification') -> 'UserSpecification':
        return AndSpecification(self, other)

class AndSpecification(UserSpecification):
    """Satisfied only when both wrapped specifications are satisfied."""
    def __init__(self, first: 'UserSpecification', second: 'UserSpecification'):
        self.first, self.second = first, second

    def is_satisfied_by(self, user: User) -> bool:
        return self.first.is_satisfied_by(user) and self.second.is_satisfied_by(user)

class ActiveUserSpecification(UserSpecification):
    def is_satisfied_by(self, user: User) -> bool:
        return user.is_active and user.last_login > datetime.now() - timedelta(days=30)
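
Composition is where the Specification pattern pays off. A brief usage sketch; the `VerifiedEmailSpecification` and the `users` list are hypothetical additions for illustration:

# Hypothetical extra specification for illustration
class VerifiedEmailSpecification(UserSpecification):
    def is_satisfied_by(self, user: User) -> bool:
        return user.email_verified

spec = ActiveUserSpecification().and_(VerifiedEmailSpecification())
engaged_verified = [u for u in users if spec.is_satisfied_by(u)]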

Technology Evaluation Skills

LLMs helped me develop better frameworks for evaluating new technologies:

Me: “I’m choosing between FastAPI, Django, and Flask for a new API project. Walk me through the decision criteria and trade-offs.”

AI: Provides comprehensive analysis covering performance, ecosystem, learning curve, maintainability, team factors, and specific use case fit.

This systematic approach to technology evaluation has improved decision-making across projects.

Quality and Reliability Improvements

Better Error Handling

Working with LLMs has dramatically improved my error handling patterns:

# Before: Basic try/except
try:
    result = external_api.call()
    return result
except Exception as e:
    logger.error(f"API call failed: {e}")
    return None

# After: Comprehensive error handling with AI guidance
import asyncio
from dataclasses import dataclass
from enum import Enum
from typing import Optional

import httpx

class APIError(Enum):
    TIMEOUT = "timeout"
    RATE_LIMITED = "rate_limited"
    AUTHENTICATION = "authentication_failed"
    VALIDATION = "validation_error"
    SERVER_ERROR = "server_error"
    NETWORK = "network_error"

@dataclass
class APIResult:
    success: bool
    data: Optional[dict] = None
    error: Optional[APIError] = None
    retry_after: Optional[int] = None
    error_details: Optional[str] = None

async def call_external_api_with_retry(
    endpoint: str,
    max_retries: int = 3,
    base_delay: float = 1.0
) -> APIResult:
    """Call external API with exponential backoff retry logic"""

    for attempt in range(max_retries + 1):
        try:
            async with httpx.AsyncClient(timeout=10.0) as client:
                response = await client.get(endpoint)

                if response.status_code == 200:
                    return APIResult(success=True, data=response.json())
                elif response.status_code == 429:
                    retry_after = int(response.headers.get('Retry-After', 60))
                    if attempt < max_retries:
                        await asyncio.sleep(min(retry_after, base_delay * (2 ** attempt)))
                        continue
                    return APIResult(
                        success=False,
                        error=APIError.RATE_LIMITED,
                        retry_after=retry_after
                    )
                elif response.status_code == 401:
                    return APIResult(
                        success=False,
                        error=APIError.AUTHENTICATION,
                        error_details="Invalid or expired credentials"
                    )
                elif response.status_code >= 500:
                    # Retry transient server errors with exponential backoff
                    if attempt < max_retries:
                        await asyncio.sleep(base_delay * (2 ** attempt))
                        continue
                    return APIResult(
                        success=False,
                        error=APIError.SERVER_ERROR,
                        error_details=f"HTTP {response.status_code}: {response.text}"
                    )
                else:
                    # Remaining 4xx responses are treated as validation errors
                    return APIResult(
                        success=False,
                        error=APIError.VALIDATION,
                        error_details=f"HTTP {response.status_code}: {response.text}"
                    )

        except (httpx.TimeoutException, asyncio.TimeoutError):
            if attempt < max_retries:
                await asyncio.sleep(base_delay * (2 ** attempt))
                continue
            return APIResult(success=False, error=APIError.TIMEOUT)
        except httpx.NetworkError as e:
            if attempt < max_retries:
                await asyncio.sleep(base_delay * (2 ** attempt))
                continue
            return APIResult(
                success=False,
                error=APIError.NETWORK,
                error_details=str(e)
            )

    return APIResult(success=False, error=APIError.SERVER_ERROR)
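
Calling code (inside an async function) can then branch on structured failure modes instead of testing for a bare None. A brief sketch; the downstream handlers are hypothetical:

result = await call_external_api_with_retry("https://api.example.com/orders")
if result.success:
    handle_orders(result.data)              # hypothetical handler
elif result.error == APIError.RATE_LIMITED:
    schedule_retry_in(result.retry_after)   # hypothetical scheduler
else:
    logger.error(f"API call failed: {result.error} ({result.error_details})")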

Enhanced Testing Strategies

AI has made me much more thorough in testing:

# AI suggests comprehensive test scenarios I might miss
import asyncio
from unittest.mock import Mock, patch

import pytest

# Assumes a mock_httpx fixture (not shown) that patches the httpx
# client with async-aware mocks
class TestAPIRetryLogic:
    @pytest.mark.asyncio
    async def test_successful_response(self, mock_httpx):
        mock_httpx.get.return_value = Mock(status_code=200, json=lambda: {"data": "test"})

        result = await call_external_api_with_retry("http://test.com")

        assert result.success is True
        assert result.data == {"data": "test"}

    @pytest.mark.asyncio
    async def test_rate_limit_retry_logic(self, mock_httpx):
        # First call: rate limited
        # Second call: success
        mock_httpx.get.side_effect = [
            Mock(status_code=429, headers={"Retry-After": "1"}),
            Mock(status_code=200, json=lambda: {"data": "success"})
        ]

        with patch('asyncio.sleep') as mock_sleep:
            result = await call_external_api_with_retry("http://test.com")

        assert result.success is True
        assert mock_sleep.called
        assert mock_httpx.get.call_count == 2

    @pytest.mark.asyncio
    async def test_timeout_handling(self, mock_httpx):
        mock_httpx.get.side_effect = asyncio.TimeoutError()

        result = await call_external_api_with_retry("http://test.com", max_retries=1)

        assert result.success is False
        assert result.error == APIError.TIMEOUT
        assert mock_httpx.get.call_count == 2  # Original + 1 retry

    @pytest.mark.asyncio
    async def test_exponential_backoff_timing(self, mock_httpx):
        mock_httpx.get.side_effect = [
            Mock(status_code=500),
            Mock(status_code=500),
            Mock(status_code=200, json=lambda: {"data": "final_success"})
        ]

        with patch('asyncio.sleep') as mock_sleep:
            result = await call_external_api_with_retry("http://test.com")

        # Verify exponential backoff: 1s, 2s delays
        expected_delays = [1.0, 2.0]
        actual_delays = [call.args[0] for call in mock_sleep.call_args_list]
        assert actual_delays == expected_delays

Changes in Team Dynamics

Code Review Evolution

Code reviews have become more focused on higher-level concerns:

Before AI: Reviews often caught syntax errors, missing edge cases, and basic architectural issues.

After AI: Reviews focus on:

  • Business logic correctness
  • System integration concerns
  • Performance implications
  • Security considerations
  • Long-term maintainability

AI catches many of the mechanical issues, letting human reviewers focus on strategic concerns.

Knowledge Sharing Transformation

Team knowledge sharing has evolved:

Old Pattern: “How do you implement feature X?”

New Pattern: “What are the architectural trade-offs for feature X, and why did we choose this approach?”

Discussions have moved from implementation details to design philosophy and system thinking.

Onboarding Acceleration

New team members can become productive much faster with AI assistance:

  • Understand codebase architecture through AI conversations
  • Learn project-specific patterns with guided explanation
  • Get up to speed on domain knowledge through targeted discussions
  • Contribute meaningfully within days rather than weeks

Productivity Metrics

Quantitative Changes

Tracking my productivity throughout 2024:

  • Initial development speed: 2-3x faster for new features
  • Bug fix time: 40% reduction in debugging time
  • Learning new technologies: 60% faster onboarding to new frameworks/languages
  • Documentation: 5x improvement in documentation completeness and quality
  • Code quality: 50% fewer issues found in peer reviews
  • Test coverage: 30% increase in test comprehensiveness

Qualitative Improvements

  • Confidence: More willing to tackle unfamiliar domains
  • Creativity: More time for creative problem-solving rather than routine implementation
  • Focus: Less context switching between coding and research
  • Learning: Continuous learning integrated into daily workflow
  • System thinking: Better understanding of how components fit together

Challenges and Adaptations

Over-Reliance Prevention

I’ve had to develop discipline to maintain my own problem-solving skills:

  • Practice Routine: Regular “AI-free” coding sessions to maintain core competencies
  • Understanding Verification: Always explain AI-generated code in my own words
  • Alternative Solutions: Ask for multiple approaches to prevent tunnel vision
  • Critical Evaluation: Question AI suggestions and verify assumptions

Context Management

Managing AI context has become a key skill:

# Providing rich context for better results
"""
Context: This is a high-frequency trading system where:
- Latency must be <50ms
- We process 100k orders/second
- Data consistency is critical
- Memory usage is constrained (16GB total)
- We use zero-copy networking
- Current stack: Rust, custom TCP protocol, Linux kernel bypass

Task: Implement order matching engine that maintains price-time priority
while minimising memory allocations.

Previous attempt had issues with:
- GC pauses in Java version
- Memory fragmentation in C++ version
- Complex state management

Please suggest an approach that addresses these specific challenges.
"""

Quality Assurance

Maintaining code quality with AI assistance:

  • Review Process: All AI-generated code goes through the same review process as human-written code
  • Testing Standards: AI code requires the same test coverage as manual code
  • Style Consistency: AI suggestions are adapted to match project coding standards
  • Performance Validation: AI-generated code is profiled and optimised as needed

Industry Observations

Changing Skill Requirements

The most valuable developer skills have shifted:

Increasingly Important:

  • System design and architecture
  • Problem decomposition
  • AI collaboration and prompt engineering
  • Code review and quality assessment
  • Performance analysis and optimisation
  • Security analysis and threat modeling

Less Critical:

  • Syntax memorisation
  • Stack Overflow searching
  • Boilerplate code generation
  • Basic algorithm implementation
  • Simple debugging

Market Evolution

The software development job market is evolving:

  • Higher expectations: Developers are expected to deliver more, faster
  • Specialisation: Deeper expertise in specific domains becomes more valuable
  • AI literacy: Understanding AI capabilities and limitations is becoming essential
  • Human skills: Communication, creativity, and strategic thinking are more important

Educational Implications

Programming education needs to adapt:

  • Focus on concepts: Less memorisation, more understanding of principles
  • System thinking: Greater emphasis on architecture and design patterns
  • AI collaboration: Teaching how to work effectively with AI tools
  • Problem-solving: Developing skills that complement rather than compete with AI

Looking Ahead: 2025 and Beyond

Short-Term Predictions (2025)

  • IDE Integration: AI will become seamlessly integrated into all development environments
  • Context Awareness: AI will understand entire codebases and project contexts
  • Real-time Collaboration: Multiple developers working with AI simultaneously on shared codebases
  • Specialised Models: Domain-specific AI models for different types of development

Medium-Term Evolution (2025-2027)

  • Autonomous Development: AI agents capable of implementing entire features with minimal guidance
  • Quality Assurance: AI-powered testing and code review that approaches human-level insight
  • Performance Optimisation: Automatic performance analysis and optimisation suggestions
  • Security Integration: AI-powered security analysis integrated into the development workflow

Long-Term Transformation (2027+)

  • Natural Language Programming: Describing software in natural language and having AI implement it
  • Self-Evolving Systems: Software that can modify and improve itself based on usage patterns
  • Human-AI Teams: Formal collaboration patterns between human developers and AI agents
  • New Programming Paradigms: Entirely new ways of thinking about software development

Personal Reflections

What I’ve Learned

  1. AI amplifies human capabilities rather than replacing them
  2. Good prompting is a learnable skill that dramatically improves results
  3. Context is everything: The more context you provide, the better the AI assistance
  4. Critical thinking is more important than ever in evaluating AI suggestions
  5. Learning agility has become the most valuable skill

What I’m Grateful For

  • Accelerated Learning: Ability to explore new domains with confidence
  • Creative Freedom: More time for interesting problems rather than routine tasks
  • Better Code Quality: AI catches issues I would miss
  • Continuous Improvement: Daily learning integrated into the development process
  • Problem-Solving Partnership: Having an intelligent collaborator available 24/7

What I’m Cautious About

  • Maintaining Core Skills: Ensuring I can still solve problems without AI assistance
  • Understanding Depth: Not accepting AI solutions without understanding them
  • Bias Awareness: Recognising that AI has biases and limitations
  • Security Mindset: Being extra careful about security implications of AI-generated code

Conclusion: A New Chapter in Software Development

2024 has been a watershed year in software development. We’ve crossed a threshold where AI has become not just a tool but a genuine collaborator in the creative process of building software. This isn’t just about writing code faster; it’s about thinking about problems differently, learning continuously, and pushing the boundaries of what individual developers can achieve.

The future of programming isn’t about humans versus AI; it’s about humans and AI working together to create better software, solve harder problems, and build more ambitious systems than either could achieve alone.

As I look ahead to 2025, I’m excited about the possibilities while remaining committed to maintaining the human judgment, creativity, and critical thinking that make software development both a science and an art. The tools are changing rapidly, but the fundamental goal remains the same: building software that solves real problems and creates value for people.

The age of LLMs has begun, and we’re all learning to navigate this new landscape together. What a fascinating time to be a developer.


How has 2024 changed your approach to software development? What changes do you anticipate in the coming year, and how are you preparing for them?