Bringing the Workplace Up to Speed: My First AI Presentation Link to heading
Last month, I gave my first formal presentation at work about AI in the workplace. What started as a casual conversation about my productivity gains with Roo Code turned into a 45-minute deep dive for the entire engineering team. The experience taught me as much about organisational change as it did about explaining AI capabilities to skeptical colleagues.
The Setup: A Divided Room Link to heading
Walking into that conference room, I could sense the energy immediately. About a third of the team was genuinely curious, another third seemed skeptical, and the final third appeared to be there only because attendance was “strongly encouraged.” The age distribution told a story too; younger developers were more open, while senior engineers who’d lived through multiple hype cycles wore their skepticism openly.
I opened with a quote that I’d been mulling over for weeks:
“AI will not replace developers. Developers using AI will replace developers who don’t.”
The room went quiet. Not the good kind of quiet; the uncomfortable kind where people are processing something they don’t want to hear.
The Historical Context: It’s Just Another Tool Link to heading
Rather than diving straight into AI capabilities, I started with history. I put up a slide showing the evolution of programming:
1940s-1950s: Punch Cards and Machine Code Link to heading
01001000 01100101 01101100 01101100 01101111
Direct hardware programming, every bit manually specified
1960s-1970s: Assembly Language Link to heading
MOV AH, 09h
MOV DX, OFFSET msg
INT 21h
Symbolic representation of machine instructions
1980s-1990s: High-Level Languages Link to heading
printf("Hello, World!");
Abstract concepts compiled to machine code
2000s-2010s: Interpreted and Managed Languages Link to heading
print("Hello, World!")
Runtime interpretation, garbage collection, frameworks
2020s: Natural Language Programming Link to heading
Write a function that greets users and handles multiple languages
Intent-based programming with AI translation
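To make that last step concrete, here is one plausible shape of what such a prompt might produce; the snippet is purely illustrative and wasn’t on the slide:

```typescript
// Illustrative only: one plausible result of the natural-language prompt above.
const greetings: Record<string, string> = {
  en: "Hello",
  es: "Hola",
  fr: "Bonjour",
  ja: "こんにちは",
};

// Falls back to English when the requested language isn't known.
function greet(name: string, lang: string = "en"): string {
  return `${greetings[lang] ?? greetings.en}, ${name}!`;
}
```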
“Notice something?” I asked. “The CPU still executes the same 1s and 0s. We’ve just gotten better tools to express our intent.”
The skeptical looks started softening. One senior engineer nodded slowly; he’d lived through several of these transitions.
The Practical Demonstration Link to heading
Theory is one thing, but engineers need to see it work. I opened VS Code with Roo Code and shared my screen.
Live Demo 1: Bug Investigation Link to heading
I pulled up a tricky bug from our backlog; one that had been sitting there for two weeks because nobody wanted to dig through the legacy authentication system.
“Let me show you how I’d approach this today versus six months ago.”
Traditional approach:
- Read through 500 lines of authentication middleware
- Set breakpoints, step through code
- Search Stack Overflow for similar issues
- Spend 3-4 hours understanding the flow
AI-assisted approach:
Roo, analyse this authentication middleware and identify
potential causes for users getting logged out after 15 minutes
despite a 30-minute session timeout configuration.
Within two minutes, Roo had identified the issue: a JWT refresh mechanism that wasn’t accounting for server-side timezone differences. The fix took another five minutes.
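For anyone who wants a concrete picture of that class of bug, here is a minimal TypeScript sketch; the names are invented and this is not our actual middleware, just the general mismatch between a timezone-less expiry and a UTC-based comparison:

```typescript
// Hypothetical illustration of the bug class described above (names invented).
interface SessionRecord {
  userId: string;
  expiresAt: string; // stored as "2025-03-01T14:30:00" -- no timezone marker
}

// Buggy: a date-time string without an offset is parsed as *local* server time,
// so a server whose clock zone is ahead of UTC sees the session expire early.
function isExpiredBuggy(session: SessionRecord): boolean {
  return new Date(session.expiresAt).getTime() < Date.now();
}

// Fixed: treat the stored expiry explicitly as UTC (or better, store epoch milliseconds).
function isExpiredFixed(session: SessionRecord): boolean {
  return Date.parse(session.expiresAt + "Z") < Date.now();
}
```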
The room was paying attention now.
Live Demo 2: Feature Implementation Link to heading
“Let’s build something new,” I said, turning to our project manager. “What’s a small feature we’ve been putting off?”
“The export functionality for the reporting dashboard,” she replied immediately. “It’s been in the backlog for months.”
I opened a new file and started describing what I wanted:
Create an export service that can:
- Export dashboard data as CSV, JSON, or PDF
- Handle large datasets with streaming
- Include proper error handling and logging
- Follow our existing API patterns
- Add appropriate tests
Twenty minutes later, we had a complete implementation with tests, documentation, and integration points. The QA engineer tested it live; it worked.
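To give a flavour of the “handle large datasets with streaming” part, here is a minimal TypeScript sketch of what the CSV path can look like; the helper names (fetchPage, csvChunks) are invented for illustration, and this is not the code Roo actually generated:

```typescript
import { Readable } from "node:stream";

type Row = Record<string, string | number | null>;

// Quote fields containing delimiters, quotes, or newlines, per RFC 4180.
function toCsvLine(values: (string | number | null)[]): string {
  return (
    values
      .map((v) => {
        const s = v === null ? "" : String(v);
        return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
      })
      .join(",") + "\n"
  );
}

// fetchPage is an assumed data-access function: returns up to `limit` rows from `offset`.
async function* csvChunks(
  fetchPage: (offset: number, limit: number) => Promise<Row[]>,
  columns: string[],
  pageSize = 1000,
): AsyncGenerator<string> {
  yield toCsvLine(columns);
  for (let offset = 0; ; offset += pageSize) {
    const rows = await fetchPage(offset, pageSize);
    if (rows.length === 0) return;
    for (const row of rows) {
      yield toCsvLine(columns.map((c) => row[c] ?? null));
    }
  }
}

// Usage inside an HTTP handler: stream straight to the response so large
// exports never sit fully in memory, e.g.
//   Readable.from(csvChunks(fetchDashboardRows, ["date", "metric", "value"])).pipe(res);
```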
“This would have taken me at least a full day to research, implement, and test,” I said. “We just did it in twenty minutes.”
Addressing the Concerns Link to heading
“What about code quality?” Link to heading
The first question came, predictably, from our architect.
I showed them the code Roo had generated; clean, well-structured, following our team’s conventions. “The quality is as good as the context you provide,” I explained. “If you give it our style guide, our patterns, and clear requirements, it follows them consistently.”
“What about understanding the code?” Link to heading
From a senior developer: “If the AI writes it, do you really understand what it’s doing?”
“Do you understand every line of the frameworks we use?” I countered. “We work at different levels of abstraction. The key is understanding the intent and behavior, not memorising every implementation detail.”
“What about security?” Link to heading
Our security specialist leaned forward. “Are you sending our proprietary code to OpenAI?”
I explained Roo Code’s architecture: how it can use GitHub Copilot for LLM inference while keeping context handling local, so sensitive data isn’t shipped wholesale to external servers, and the privacy models available. “We evaluate it just like we evaluate any other tool for security implications,” I told them.
“What about dependency?” Link to heading
“What happens when the AI service goes down?” asked our DevOps lead.
“Same thing that happens when GitHub goes down, or AWS, or any other service we depend on,” I replied. “We plan for it, we have fallbacks, but we don’t avoid useful tools because of potential downtime.”
The Evolution Argument Link to heading
This was where the presentation really clicked for people. I went back to the historical slide:
“When we moved from assembly to C, did we lose the ability to write assembly? No; we chose not to, because C was more productive. When we adopted frameworks instead of writing everything from scratch, did we lose the ability to write low-level code? No; we chose the more productive path.
“Every generation of developers has faced this choice. The punch card programmers probably said, ‘These young developers with their compilers don’t really understand the machine.’ The assembly programmers probably said, ‘High-level language developers don’t understand what’s really happening.’
“Today, some of us are saying, ‘AI-assisted developers don’t really understand the code.’ But the pattern is the same; new tools, same outcomes, better productivity.”
The Productivity Numbers Link to heading
I shared some real data from my work over the past six months:
Before AI assistance (Q3 2024):
- Average feature completion: 3-5 days
- Time spent on research: 25%
- Time spent on boilerplate: 20%
- Time spent on actual problem-solving: 55%
With AI assistance (Q1 2025):
- Average feature completion: 1-2 days
- Time spent on research: 5%
- Time spent on boilerplate: 5%
- Time spent on actual problem-solving: 90%
“I’m not working longer hours,” I emphasised. “I’m working on higher-value problems. Instead of spending a morning researching API patterns, I spend it designing system architecture. Instead of writing CRUD operations, I’m focusing on business logic.”
The Team’s Varied Reactions Link to heading
The Converts Link to heading
About a third of the team was immediately interested. They started asking about setup, licensing, and how to get started. Our newest developer, fresh out of university, raised her hand: “This is exactly how I learned to code; by describing what I wanted and iterating on the results.”
The Cautious Optimists Link to heading
Another third wanted to try it but had concerns about best practices. “How do we ensure code quality?” “What are the security implications?” “How do we maintain our coding standards?”
These were the engineers I was most excited about; they saw the potential but wanted to implement it responsibly.
The Holdouts Link to heading
The final third remained skeptical. “This is just autocomplete on steroids.” “Real programmers should understand every line they write.” “What happens when the AI makes a mistake?”
I didn’t try to convert the holdouts immediately. Change management isn’t about convincing everyone at once; it’s about creating momentum with early adopters and letting results speak for themselves.
The Implementation Plan Link to heading
By the end of the presentation, we had volunteers for a pilot program:
- Phase 1 (Month 1): Five developers try AI-assisted development on non-critical features
- Phase 2 (Month 2): Measure productivity, code quality, and team satisfaction
- Phase 3 (Month 3): Decide on wider adoption based on results
We established guidelines:
- All AI-generated code must be reviewed like any other code
- Security-sensitive components require extra scrutiny
- Document AI usage in commit messages for transparency (see the example after this list)
- Track time savings and productivity metrics
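As a concrete example of the commit-message guideline, a Git trailer works well; the exact wording below is only a suggestion, not something the pilot mandates:

```
feat(reports): add dashboard export service

Implement CSV/JSON/PDF export with streaming for large datasets.

Assisted-by: Roo Code (initial implementation and test scaffolding)
```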
Three Weeks Later: Early Results Link to heading
The pilot program has been running for three weeks now. Early results are encouraging:
Productivity Gains:
- 40% reduction in feature implementation time
- 60% reduction in time spent on documentation
- 50% reduction in time spent debugging integration issues
Code Quality:
- No significant difference in bug rates
- Improved consistency in coding patterns
- Better documentation and comments
Team Satisfaction:
- Pilot participants report higher job satisfaction
- More time spent on interesting problems
- Reduced frustration with repetitive tasks
The Resistance Patterns Link to heading
Not everyone is convinced yet. I’ve noticed several resistance patterns:
The “Purity” Argument Link to heading
“If you didn’t write it yourself, do you really understand it?”
My response: “Do you understand the implementation of every library we use? We work at appropriate levels of abstraction.”
The “Job Security” Fear Link to heading
“Won’t this make us obsolete?”
My response: “When spreadsheet software was invented, it didn’t eliminate accountants; it eliminated manual calculation drudgery and let accountants focus on analysis and strategy.”
The “Quality” Concern Link to heading
“AI code is probably buggy and insecure.”
My response: “AI code is as good as the context and review you provide. It’s still our job to understand, review, and validate.”
The “Dependency” Worry Link to heading
“We’re becoming too dependent on external tools.”
My response: “We’re already dependent on compilers, databases, cloud services, and dozens of other tools. The key is understanding the trade-offs.”
What I’ve Learned About Change Management Link to heading
Start with the Willing Link to heading
Don’t try to convert everyone at once. Focus on people who are curious and let their results influence others.
Address Concerns Directly Link to heading
Don’t dismiss skepticism; engage with it. Most concerns are valid and addressing them builds trust.
Show, Don’t Tell Link to heading
Live demonstrations are worth a thousand slides. Let people see the tool working on real problems.
Make it Safe to Experiment Link to heading
Create low-risk environments where people can try new approaches without fear of failure.
Measure and Share Link to heading
Track concrete metrics and share results transparently. Data convinces people whom arguments alone can’t reach.
The Broader Implications Link to heading
This presentation made me realise we’re living through a fundamental shift in software development. It’s not just about productivity; it’s about what we choose to spend our cognitive energy on.
The New Developer Skillset Link to heading
Traditional Skills (Still Important):
- System design and architecture
- Problem decomposition
- Code review and quality assessment
- Understanding business requirements
Emerging Skills (Increasingly Critical):
- Prompt engineering and AI interaction
- Context management and curation
- AI output evaluation and refinement
- Human-AI collaboration patterns
The Generational Divide Link to heading
I’ve noticed an interesting pattern: developers who started their careers in the last 5-10 years adapt more quickly to AI assistance. They’re already comfortable with high levels of abstraction, using frameworks they don’t fully understand, and learning through experimentation.
Experienced developers often struggle more; not because they can’t learn, but because they have deeply ingrained workflows and strong opinions about “the right way” to write code.
Looking Forward Link to heading
The pilot program continues, but I’m already planning the next phase. We’re exploring:
- AI-assisted code review processes
- Automated documentation generation
- AI-powered debugging workflows
- Cross-team knowledge sharing through AI
More importantly, we’re changing how we think about developer productivity. Instead of measuring lines of code or hours worked, we’re focusing on problems solved and business value delivered.
The Evolution Continues Link to heading
Standing in front of my colleagues three weeks ago, I realised I was witnessing the same kind of transition I’d read about in computer science history. The punch card programmers who adapted to compilers. The assembly programmers who embraced high-level languages. The C programmers who adopted frameworks and libraries.
Each transition faced resistance. Each time, the concerns were valid but ultimately outweighed by the benefits. Each time, the developers who adapted early gained advantages that compounded over time.
“AI will not replace developers. Developers using AI will replace developers who don’t.”
This isn’t a threat; it’s a description of how technology adoption works. It’s not different from previous transitions; it just feels different because we’re living through it.
The CPU still executes 1s and 0s. We’ve just found a better way to tell it what we want it to do. And that’s exactly what every generation of developers has done; find better tools to express their intent more effectively.
The evolution continues. The question isn’t whether to adapt, but how quickly.
How is your organisation handling AI adoption? Are you seeing similar resistance patterns, and what strategies have worked for building consensus around new tools?
Footnote: I ran the whole presentation from my Gen AI slide deck, “Programming in the Age of Prompts”.