February 11, 2026
A few weeks ago, I was staying overnight at one of the top hospitals in the country while a loved one was receiving care. It was one of those quiet moments where you’re half awake, listening to the rhythmic beeping of medical devices when suddenly the IV infusion pump next to the bed started screaming an alert. Within seconds, a nurse rushed in, puzzled as she stared at the device. Then she said something you never want to hear in a hospital room at 2 a.m.
“It looks like someone else is trying to log into the device remotely.”
Another nurse joined her. They exchanged that unmistakable "this shouldn't be happening" look, then unplugged the device completely, rebooted it, and re‑entered all the patient information manually. After a few tense minutes, everything was "back to normal." The room went quiet again.
But my mind didn't. Unexpected failure is one thing. Unexpected failure in a clinical environment is another. Watching that unfold in real time made me think deeply about the fragility of mission‑critical systems, the security vulnerabilities we don't always see, and the profound consequences when something goes wrong in the field.
And it struck me: It’s not that these devices lack data, diagnostics, or connectivity. It’s that across industries, we often fail to turn real‑world signals into reliable, resilient learning loops. That, to me, is the real beginning of “shifting quality right.” Not more telemetry. Not more dashboards. But more learning – faster, deeper, and closer to where the real world happens.
What I saw in that hospital room is echoed across industries in this year’s World Quality Report. Nearly every organization analyzes production data, yet almost half struggle to apply those insights to actually improve quality (94% analyze; 45% struggle to apply).
It’s not the lack of information. It’s the lack of integration.
We see this again in SRE adoption. Many organizations champion reliability engineering, but the maturity levels reported in the WQR tell a different story.
And when we examine resilience practices like chaos testing (the kinds of drills that prevent those 2 a.m. medical‑device moments) the numbers are even smaller. Only about 4% of organizations plan to introduce chaos engineering in the next 24 months.
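Chaos testing doesn't have to start big. As a minimal sketch (the service name, fallback shape, and fault rate below are all illustrative, not a real device API), a drill can inject a fault into a downstream call at a fixed rate and verify that the system degrades gracefully instead of crashing:

```python
import random

def fetch_vitals(patient_id: str, simulate_fault: bool = False) -> dict:
    """Hypothetical downstream call; in a real drill this would be a network request."""
    if simulate_fault:
        raise TimeoutError("injected fault: upstream timed out")
    return {"patient_id": patient_id, "hr": 72}

def fetch_vitals_resilient(patient_id: str, inject: bool) -> dict:
    """The system under test: it must degrade gracefully, never crash."""
    try:
        return fetch_vitals(patient_id, simulate_fault=inject)
    except TimeoutError:
        # Fallback: last known reading, clearly flagged as stale for the clinician.
        return {"patient_id": patient_id, "hr": None, "stale": True}

def chaos_drill(runs: int = 100, fault_rate: float = 0.3) -> bool:
    """Inject faults at a fixed rate and verify every call still returns safely."""
    random.seed(42)  # reproducible drill
    for _ in range(runs):
        result = fetch_vitals_resilient("p-001", inject=random.random() < fault_rate)
        if "patient_id" not in result:
            return False
    return True
```

The point of the drill is not the fault itself but the assertion afterwards: every injected failure must still leave the caller with a safe, well-formed response.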
The pattern is consistent across industries, and it mirrors what I witnessed in that hospital room. Shift‑right isn’t struggling because the technology isn’t ready (in fact, the tooling has never been more capable). It’s simply that our quality culture is still catching up, and as it matures, organizations are beginning to see just how powerful and transformative shift‑right can truly be. The promise is already there; the opportunity now lies in strengthening the habits, ownership, and learning loops that convert data into real‑world resilience.
So where should teams begin? The organizations that succeed with shift‑right across industries show a consistent pattern. Their focus is not on tools or frameworks but on culture. They do a few foundational things consistently well. So here are some practical ways to build a powerful shift‑right culture in your quality engineering setup.
A short ritual (as little as thirty minutes once a week) focused on one loop: Telemetry → Test updates → Engineering action.
No slide decks. No reports. Just teams looking at what’s happening in production and making immediate decisions:
The WQR highlights this gap: although 94% of organizations collect telemetry, only 13% use it proactively to raise quality upstream.
The ritual forces learning to happen. Every week. Without fail.
SRE is powerful, but only when reliability becomes a team sport. Error budgets, rollback criteria, and production quality gates all work best when owned collectively, not by a siloed team.
The uneven maturity of SRE practices in the WQR reflects exactly this challenge.
AI can accelerate anomaly detection, cluster production failures, generate regression tests from real user flows, and recommend test-case optimizations.
But it cannot replace judgment. Especially when human safety or customer trust is involved. The WQR calls out that while many organizations adopt AI‑enabled monitoring tools, governance and integration remain shallow.
AI helps us see faster. Humans help us understand better. Shift‑right lives exactly at that intersection.
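That division of labor can be made concrete even without heavy tooling. In the sketch below, a simple z-score filter (standing in for more sophisticated AI anomaly detection, with illustrative latency values) surfaces outliers as a review queue; deciding what they mean stays with the engineers:

```python
import statistics

def flag_anomalies(latencies_ms, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean.
    The output is a review queue for humans, not an automatic action --
    the judgment step deliberately stays in the loop."""
    mean = statistics.fmean(latencies_ms)
    stdev = statistics.pstdev(latencies_ms)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [x for x in latencies_ms if abs(x - mean) / stdev > threshold]
```

The machine narrows thousands of data points down to a handful worth looking at; the weekly ritual is where people decide which of those become tests, rollbacks, or fixes.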
Once this discipline takes hold, results show up fast. Often, dramatically so.
Shift‑right doesn’t just change testing. It changes trust.
If your organization, like most, is drowning in data but starving for insight, you don’t need another monitoring solution or a dramatic platform overhaul.
What you need is a learning loop that connects production reality to engineering decisions. Start with that one weekly ritual. One shared metric. One AI‑assisted insight that informs next week's tests. Then keep iterating.
For a deeper view into the patterns, maturity gaps, and opportunities shaping this shift across industries, I strongly recommend diving into the World Quality Report and holding up your roadmap against the global benchmarks.
Because quality isn’t measured by how few issues escape. It’s measured by how fast we learn — and how committed we are to applying that learning where it matters most.