The role
As part of IgniteTech’s Strategic Services team, I lead a practice of more than 25 people delivering AI-native services across more than eleven enterprise accounts: community management, social engagement, and the AI-enabled services layer built on top of Khoros. The through-line of the work is specific: help enterprise customers get real, measurable outcomes from AI, not just a chatbot grafted onto an older workflow.
The portfolio
The practice manages a portfolio exceeding $11.5 million in annual contract value, with retention above 88 percent across a client list spanning consumer, telecom, media, energy, and life sciences. Each account has its own operating rhythm (its own moderation cadence, escalation paths, and reporting shape), and a large part of my job is making sure those rhythms don’t drift, even as the team scales across three continents.
Clients
- ExxonMobil
- SKY UK
- Telenet
- Meta
- Disney
- British Telecom
- Orange France
- Sinch UK
- Astellas
The transformation
Shortly after I joined, I started on what’s become the largest structural change in the practice: moving the service model from time-and-materials to delivery-based pricing. That’s a pricing change on paper and an operational redesign in practice — everything from how teams scope work, to how we report status, to what counts as a unit of output had to be rethought.
The transformation is projected to produce double-digit revenue growth and 30 percent faster delivery cycles once fully rolled out. The real win isn’t the top-line number; it’s that the new model forces a conversation about outcomes rather than hours, which is the only honest way to price AI-native work.
The SSCMA Dashboard
Most of the friction we hit early was visibility. Team data was fragmented across tools, which meant leadership decisions had to wait on manual reporting cycles that could eat an entire Friday. I built an internal AI-first operating layer — the SSCMA Dashboard — that centralizes execution data across the practice and automates the reporting that used to require a dedicated headcount.
I won’t share internals of the tool publicly, but directionally: it’s the kind of system that turns an hour-long status meeting into a five-minute scan, and converts a thousand scattered signals into a single, answerable question — where is the team blocked this week, and on whom.
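The tool’s internals stay private, but the underlying pattern is common enough to sketch: pull task records from wherever the teams work, then reduce them to the one question that matters. A minimal illustration of that pattern, with invented field names and data, not the dashboard itself:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    owner: str                  # who the work is assigned to
    blocked_on: Optional[str]   # who or what it is waiting on, if anything

def blocker_summary(tasks):
    """Group blocked tasks by the person or system they wait on."""
    summary = defaultdict(list)
    for t in tasks:
        if t.blocked_on:
            summary[t.blocked_on].append(t.owner)
    return dict(summary)

# Stand-in records; a real system would ingest these from project tools.
tasks = [
    Task("ana", None),
    Task("ben", "legal review"),
    Task("ana", "legal review"),
    Task("cho", "client approval"),
]
print(blocker_summary(tasks))
# → {'legal review': ['ben', 'ana'], 'client approval': ['cho']}
```

The reduction is deliberately lossy: the dashboard’s job is not to mirror every tool, but to answer “who is waiting on whom” at a glance.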
One concrete example
One engagement illustrates how the AI-native posture plays out in practice: the annual moderation report for a global energy major in our portfolio. The historical version was a manual exercise — someone reading through a year of content and coding it by hand. I built a Python pipeline that analyzed 136,000-plus social posts across the year, converting what had been a week of human labor into a repeatable, scalable analytics workflow that can run again next year with a single command.
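The pipeline’s internals aren’t described here, so the following is only a minimal sketch of the shape of such a workflow, assuming posts arrive as a CSV with month and text columns and that coding is done by simple keyword rules (in practice an LLM or trained classifier would do the categorization):

```python
import csv
import io
from collections import Counter

# Hypothetical category rules; a real pipeline would use a classifier.
CATEGORY_KEYWORDS = {
    "support": ("help", "issue", "broken"),
    "praise": ("love", "great", "thanks"),
}

def categorize(text: str) -> str:
    """Assign a post to the first matching category, else 'other'."""
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return category
    return "other"

def moderation_report(rows):
    """Count posts per (month, category) from (month, text) pairs."""
    counts = Counter()
    for month, text in rows:
        counts[(month, categorize(text))] += 1
    return counts

# Stand-in for a year of exported posts (the real run covered 136k+).
sample = io.StringIO(
    "month,text\n"
    "2024-01,Love the new community layout\n"
    "2024-01,My account is broken again\n"
    "2024-02,Thanks for the quick response from the mod team\n"
)
reader = csv.DictReader(sample)
report = moderation_report((r["month"], r["text"]) for r in reader)
for (month, category), n in sorted(report.items()):
    print(f"{month}  {category:8s} {n}")
```

The point of the pattern is that the expensive part (a human reading and coding every post) becomes a function call, so next year’s report is a re-run, not a project.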
That’s the move, repeated: find the slow manual thing, automate it well enough that the humans get to do something more interesting next quarter.
The thesis
Stated plainly: services teams have spent the last decade optimizing for throughput under a human-led, AI-assisted model. The next decade belongs to teams that flip the assumption — AI-native by default, with humans in the loop where judgment matters. The work at IgniteTech is, in part, an attempt to build that team in the open.