
Value first, then the tool — less friction, more predictability.
Introduction
Let’s agree on one thing: implementing ITIL (or any IT service management) isn’t a button you press. It’s a restaurant kitchen at rush hour: great people, great recipes… and the dish can still come out cold if the kitchen isn’t in sync.
The goal here is simple: a straight-talk guide to spot where projects stumble and how to get them back on track.
The usual culprits:
- Lack of team involvement — “ivory-tower” processes don’t survive the first incident.
- Insufficient planning — without a map, backlog, and “done” criteria, the team walks in circles.
- Ignoring the company’s real needs — copy-pasting your neighbor’s setup serves bland food (and zero value).
- Inadequate training — a tool without context becomes clicking; a practice without purpose becomes bureaucracy.
- Forgetting ongoing maintenance — go-live is the start; continual improvement is the game.
If at any point you think, “yep, that happens here,” perfect: mark that spot as your first step. And if you want quick support to level-set concepts, take a look at PMG Academy’s ITIL 4 Foundation course — a solid starting point to get the team speaking the same language and evolving consistently.
Before you start: the guiding principles that save projects
Here’s the truth: without a compass, any path looks like a shortcut. Use ITIL 4’s Guiding Principles as decision filters from start to finish: focus on value, start where you are, progress iteratively with feedback, collaborate and promote visibility, think and work holistically, keep it simple and practical, and optimize/automate when it makes sense. The official source lays these out clearly — worth checking PeopleCert’s overview of the ITIL foundations.
To help, here are a few plain-English translations of the jargon:
- Service catalog: the menu of what IT actually delivers, with lead times, quality, and how to request.
- Service Level Agreement (SLA): the formal time/quality commitment between IT and the internal customer.
- Knowledge base: short articles that teach people to solve issues without always depending on an analyst.
Mistake zero (bonus): starting with the tool
Actually, not “zero” at all — let’s call it what it is: a basic and, unfortunately, frequent mistake. 🙁
Classic symptom: the project kicks off by buying the tool, and only afterward someone asks, “which practices again?” Cue dramatic pause…
How to avoid it: define value objectives, map value streams (from request to delivery), and set minimum requirements by practice (incidents, requests, changes) before selecting the tool. When configuring, keep customization to a minimum and ship an MVP (minimum viable version) to validate quickly.
How to prove value? Check these indicators:
- % of requirements met with native features;
- reduction in Mean Time to Restore (MTTR) after changes;
- drop in “duct-tape” hacks and proprietary customizations.
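As a rough illustration, the first two indicators can be computed straight from project records. A minimal sketch — all data and field names here are hypothetical, not from any specific tool:

```python
# Sketch: tool-selection proof-of-value indicators.
# Requirement records and field names are hypothetical examples.

def native_coverage(requirements):
    """% of requirements met with native (out-of-the-box) features."""
    met_natively = sum(1 for r in requirements if r["met"] and not r["customized"])
    return 100 * met_natively / len(requirements)

def mttr_reduction(mttr_before_hours, mttr_after_hours):
    """% reduction in Mean Time to Restore after the change wave."""
    return 100 * (mttr_before_hours - mttr_after_hours) / mttr_before_hours

requirements = [
    {"id": "REQ-1", "met": True,  "customized": False},
    {"id": "REQ-2", "met": True,  "customized": True},   # met only via customization
    {"id": "REQ-3", "met": True,  "customized": False},
    {"id": "REQ-4", "met": False, "customized": False},
]

print(f"Native coverage: {native_coverage(requirements):.0f}%")  # 50%
print(f"MTTR reduction:  {mttr_reduction(8.0, 5.0):.1f}%")       # 37.5%
```

If native coverage comes out low before you buy, that’s a signal to revisit requirements — not to plan more customization.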
See? Simple, basic, and it works. Try it and tell me how it goes.
PMG fast help: if you need the team aligned on a shared vocabulary before touching the tool, an introductory ITSM/ITIL 4 course saves a lot of back-and-forth — see options at PMG Academy.
1) Lack of team (and business) involvement
Without people, there’s no service management. Involving only IT creates blind spots: the folks who use the service need a voice from the start.
How to fix:
- Secure executive sponsorship and assign service owners (accountable for outcomes).
- Build the service catalog with business areas.
- Plan communications by persona (who needs to know what, when, and through which channel).
Signs it’s working:
- Self-service portal adoption (60%+ of requests start there);
- User satisfaction measured by service;
- Named, active service owners.
2) Insufficient planning
Planning isn’t a pretty Gantt. It’s wave-based road-mapping, clear “done” criteria, and a transparent risk/benefit plan.
How to fix:
- Create a plan by waves (MVP → expansion) with objective “done” criteria.
- Map the flow for incidents, requests, and changes: inputs, outputs, queues, and metrics.
- Tie the plan to value metrics: time to restore, change success rate, portal adoption, and user satisfaction.
Useful source: The DORA State of DevOps report describes four delivery-performance metrics (deployment frequency, lead time for changes, change failure rate, and time to restore) that make flow measurable (cloud.google.com/devops/state-of-devops).
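The four DORA metrics fall out of data you likely already have: deployment records and restore times. A minimal sketch, with hypothetical timestamps and record shapes:

```python
# Sketch: the four DORA delivery metrics from deployment and incident records.
# Record shapes, timestamps, and the 7-day window are hypothetical examples.
from datetime import datetime, timedelta
from statistics import median

# (commit timestamp, deploy timestamp, deployment failed?)
deployments = [
    (datetime(2024, 5, 1, 9),  datetime(2024, 5, 1, 15), False),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), True),
    (datetime(2024, 5, 4, 8),  datetime(2024, 5, 4, 12), False),
]
restore_times = [timedelta(hours=2), timedelta(minutes=45)]  # one per failure
period_days = 7

deploy_frequency = len(deployments) / period_days  # deployments per day
lead_time = median([deploy - commit for commit, deploy, _ in deployments])
change_failure_rate = 100 * sum(1 for *_, failed in deployments if failed) / len(deployments)
time_to_restore = median(restore_times)

print(f"Deploy frequency:    {deploy_frequency:.2f}/day")
print(f"Lead time (median):  {lead_time}")
print(f"Change failure rate: {change_failure_rate:.0f}%")
print(f"Time to restore:     {time_to_restore}")
```

The point isn’t the arithmetic — it’s that once these four numbers exist, “is the plan working?” stops being an opinion.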
3) Ignoring the company’s needs (and compliance)
Copying your neighbor fails because every company has different context, risk, and priorities.
How to fix:
- Translate business goals into SLOs/SLIs and realistic SLAs.
- Address legal requirements: data protection (e.g., GDPR/LGPD), audits, and standards like ISO/IEC 20000.
- For critical suppliers, set support contracts consistent with what you promise your customers.
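Translating a business goal into an SLI checked against an SLO can be as small as this — a sketch with hypothetical numbers and targets, not a recommendation for your thresholds:

```python
# Sketch: an availability SLI measured against an SLO.
# The 99.5% target and the downtime figure are hypothetical examples.

SLO_AVAILABILITY = 99.5  # % — the target agreed with the business

def availability_sli(total_minutes, downtime_minutes):
    """SLI: % of minutes in the period the service was actually available."""
    return 100 * (total_minutes - downtime_minutes) / total_minutes

# One 30-day month with 90 minutes of unplanned downtime:
sli = availability_sli(total_minutes=30 * 24 * 60, downtime_minutes=90)
meets_slo = sli >= SLO_AVAILABILITY
print(f"SLI {sli:.2f}% — {'within' if meets_slo else 'breaching'} SLO")
```

The same shape works for request lead times or first-contact resolution: one measurable indicator, one agreed target, one honest comparison.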
Baseline reference: ITIL 4’s principles and practices provide a structure to align service to value, officially recognized by PeopleCert in the ITIL 4 guidance.
4) Inadequate role-based training and knowledge management
A “big bang” training the day before go-live doesn’t build competence — it builds panic.
How to fix:
- Role-based learning paths (e.g., major incident commander, problem manager, service desk).
- Knowledge-Centered Service (KCS): short articles created and improved from real interactions.
- Integrate with engineering: change evidence comes from the pipeline (testing, security, segregation of duties), not from a meeting.
Data point: Operational metric tracking keeps growing: 80% of service professionals reported monitoring first-contact resolution in 2024 (Salesforce State of Service — salesforce.com).
5) Forgetting operations and continual improvement
Go-live is the beginning. What sustains value is the cadence of measuring, learning, and adjusting.
How to fix:
- Keep a continual-improvement backlog with cadence (biweekly/monthly) and clear ownership.
- Run blameless post-incident reviews; capture causes and trends and turn them into preventive actions.
- Use observability and monitoring to cut noise and anticipate failures.
It’s working when: alert noise drops, repeat incidents decline, and preventive actions get finished — not turned into eternal tasks.
Building blocks that almost always go missing in the first wave
Quick context: these are “value leaks” that, if ignored, make the rest slip. Pick 1–2 to tackle now — not all at once.
- Major Incident: single command chain, clear comms, and a status page.
- Change, release, and deployment: a gated pipeline with segregation of duties; a change board only for exceptions.
- Configuration & assets (CMDB/SAM) in a “lite” version: small scope, data ownership, and automated reconciliation.
- Knowledge (KCS) integrated into the service desk and end-user portal.
- Requests modeled as reusable catalog items with lead times and simple automations.
- Suppliers with targets and integration aligned to what you promise internally.
If you want a baseline view to level-set the team on these blocks, PMG Academy’s ITIL 4 Foundation provides the framing without getting in the way of practice.
Micro-checklist for the first wave
Before you use the list, a heads-up: the idea is traction and fast learning. This isn’t a test — it’s a guide to action.
- Service map with owners and service-level objectives/indicators.
- Catalog with the top 20 requests and incidents (clear how-to-request and lead times). Pareto’s 80/20 rule fits like a glove here!
- Tool configured to the minimum necessary (no “magic” customizations).
- Operations running: collaborative service desk, major incident, problem, and basic change/release.
- Active measurement: MTTR, change success rate, portal adoption, user satisfaction.
- Improvement backlog with goals and dates (biweekly/monthly).
If you’d like supporting material to turn this into a workshop, ping me and I’ll share a draft; meanwhile, check PMG Academy for learning paths that help sustain this rhythm without drama.
Proof-of-value metrics (tune targets by service)
Quick context: measuring well is the antidote to “the feeling of progress.” Choose 3–5 indicators that show perceived value and healthy flow.
- Mean Time to Restore (after incidents or changes).
- Lead time for changes and deployment frequency (inspired by DORA — see the official report at cloud.google.com/devops/state-of-devops).
- Change success rate (fewer rollbacks, fewer urgent fixes).
- Self-service portal adoption (requests and incidents initiated there).
- CMDB accuracy (critical CIs reconciled, no “orphans”).
- User satisfaction by service and persona.
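CMDB accuracy is the one on this list people rarely know how to compute. A simple version is set reconciliation between the CMDB and a discovery scan — a sketch with hypothetical CI names and data:

```python
# Sketch: a basic CMDB accuracy check via reconciliation against discovery.
# CI names and the data sources are hypothetical examples.

cmdb_cis   = {"srv-app-01", "srv-db-01", "srv-web-02", "srv-old-09"}
discovered = {"srv-app-01", "srv-db-01", "srv-web-02", "srv-new-11"}

orphans      = cmdb_cis - discovered    # in the CMDB but not seen on the network
undocumented = discovered - cmdb_cis    # on the network but missing from the CMDB
accuracy     = 100 * len(cmdb_cis & discovered) / len(cmdb_cis)

print(f"Accuracy: {accuracy:.0f}%")
print(f"Orphans: {sorted(orphans)}  Undocumented: {sorted(undocumented)}")
```

Run the reconciliation automatically and on a cadence — a one-off cleanup decays back into “orphans” within months.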
Wrap-up
Implementing service management isn’t installing software. It’s aligning on value, streamlining flow, and measuring outcomes in a way the business understands. If you dodge the tool-first trap and handle the five points with short waves, clear links between practice and value, and a handful of useful metrics, operations breathe — and customers notice.
Spirit in one line: less friction, more predictability.
Liked the chat? Describe your scenario in two lines (team size, tool, biggest pains) and I’ll reply with a first-step draft you can try as soon as next week.