“Hey ChatGPT, how do I word this email diplomatically?”
Twenty minutes later, after a dozen rewrites that somehow sound more passive-aggressive than polite, you’re pacing the room, wondering if HAL 9000 had better EQ.
Welcome to the era of AI Rage — the digital-age cousin of road rage. And lurking in its shadow? A potential new stress disorder: AI PTSD.
No, this isn’t satire. While “AI PTSD” isn’t a formal diagnosis, the emotional toll of working with AI tools like ChatGPT, Claude, Gemini, and others is becoming harder to ignore. We’re not just using these tools. We’re interacting with them — often intensely, repeatedly, and with high expectations. And that’s where things get complicated.
The Rise of AI Rage
Imagine you’re racing toward a deadline and your AI assistant decides that “banana” is the right answer to a question about your cloud security posture. It’s funny — until it’s not.
The Financial Times recently reported that overreliance on AI tools in the workplace is beginning to impact mental health. Information overload, reduced social interaction, and increasing dependency on tools that don’t always deliver — it’s a perfect storm for digital burnout.
Emotional AI — Or Emotional Minefield?
A recent article in PsyPost explored how some people are turning to AI for emotional support. And sometimes, yes, it helps. But other times, you get a robotic “I don’t have feelings, but I’m here for you” — which can feel more isolating than comforting.
Meanwhile, The Guardian raised valid concerns about so-called “emotional AI,” noting that detecting and responding to human emotion is far more complex (and prone to bias) than we’d like to believe.
AI PTSD: Science Fiction or Emerging Reality?
Let’s be clear — we’re not talking about trauma in the traditional sense. But repeated failures, unmet expectations, and a sense of emotional mismatch with our digital tools can slowly erode trust. And that erosion leaves a mark.
We anthropomorphize everything — pets, cars, even smart fridges. So when the “smartest thing in the room” gives you the wrong answer for the tenth time in a row, it feels personal. That feeling adds up.
So What Can We Do?
1. Treat AI like an intern, not a genius. Fast and promising, yes — but not infallible.
2. Rant to real people. Venting to a chatbot is like shouting into a canyon. It echoes, but it doesn’t help.
3. Know when to walk away. If you’re tempted to argue with a bot, it’s time for a stretch break.
4. Design better tools. Transparent systems, smarter defaults, and less anthropomorphic fluff would go a long way.
AI is changing everything — how we work, how we think, even how we relate. But if we’re going to coexist with these tools, we need to start being honest about how they make us feel, not just what they can do.
The future is artificially intelligent. But let’s make sure we stay emotionally intelligent along the way.
02 Nov 2019 | Thad Széll, UBS and Nuno Ferreira, Volterra
Software as a Service (SaaS) offerings are becoming increasingly prevalent across all industries as organizations look for ever more dynamic and flexible ways to leverage software while ensuring operational stability, cost transparency, dynamic scale and agility.
Before we get to the third-party provided application, however, there are several components we need to have in place to enable our users to gain access: network connectivity, storage and compute (in the infrastructure layer), plus virtualization, operating system, middleware, runtime and everything else required to allow the application to run within the services layer. Only then do we reach the realm of RBAC configuration, user management and distribution of the application to end users.
It’s also important to note that while many of the above layers are seeing consolidation and overlap in their underlying technology, organizations usually have different teams, each with its own processes, that own a specific layer. More often than not, there are multiple teams operating at each layer.
This article lays out the ideas and innovations developed by Thad Széll (Distinguished Engineer at UBS) and Nuno Ferreira (Field CTO at Volterra) to allow organisations to adopt SaaS applications in a real-time, secure and private way without compromising or tainting the existing environment. Their focus started with Microsoft Office 365.
As we all know, SaaS applications are generally accessed via the public Internet, and it is here that the first set of problems arises. The first infrastructure issue stems from the fact that the application previously sat on controlled, mainly static infrastructure, whereas now it is accessed through the Internet, which is extremely dynamic and not under the control of any single entity.
Some SaaS providers have the ability to provide dedicated
circuits (e.g. Azure offers ExpressRoute) or VPNs for organizations to connect
privately to them. To adopt such mechanisms, organizations will need to at least:
- Peer (at the network level) with the SaaS provider and maintain routing relationships
- Inject / advertise the SaaS provider’s public IP addresses into the corporate network
- Treat the service as an extranet or DMZ and overlay levels of segmentation and security
In cases where the SaaS provider offers only a VPN service to the customer, a device is required to build and maintain this VPN. Note that these technologies do not solve the latency issue.
Having a
forward proxy/NAT device can solve some security concerns but it will NOT solve
the routing problem and the need to inject third-party public IP addresses into
the corporate network.
Furthermore, SaaS providers are often extremely dynamic and their advertised addresses, DNS names and published endpoints change very regularly, so operational mechanisms need to be in place to maintain peering/injection/advertising and security controls.
A solution that allows for forward proxy / NAT without the need for route injection is therefore both more elegant and necessary.
Let’s take Office 365 as an example. Microsoft is expanding its cloud footprint with Azure, and its application services grow every day. The platform allows large enterprises to establish private, dedicated, high-speed physical links to Azure called ExpressRoute.
Microsoft therefore could address the above-mentioned problems
(around privacy and latency) by allowing enterprise employees to access Office
365 via this dedicated private ExpressRoute circuit.
ExpressRoute for Office 365 provides an alternate routing
path to many Internet-facing Office 365 services. The architecture of
ExpressRoute for Office 365 is based on advertising the public IP prefixes of
Office 365 services that are already accessible over the Internet into your
provisioned ExpressRoute circuits for subsequent redistribution of those IP
prefixes into your network. With ExpressRoute, you effectively enable several
different routing paths for Office 365 services — the Internet and
ExpressRoute. This state of routing on your network may represent a significant
change to how your internal network topology is designed and secured.
Ok, so maybe not that private.
The below picture represents this scenario:
This approach presents three distinct challenges for network
and security teams:
Network Routing and NAT
The enterprise network infrastructure team will be required to inject publicly routable IP space into the corporate network so that users follow the preferred path (via ExpressRoute). In addition, to prevent any exposure of the corporate network to Microsoft, the network team will also be required to implement NAT towards the Microsoft network.

This not only brings the operational complexity of maintaining BGP peering (along with NAT) with Microsoft, but also requires careful planning to accommodate routing being available both via a dedicated circuit with routes injected into the core network and via the Internet.
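To make that dual-path planning concrete, here is a minimal sketch (not UBS or Volterra tooling) of the decision the network now has to make for every destination; the prefixes are placeholders rather than an authoritative Office 365 list.

```python
# Minimal sketch: which path would a destination take once public prefixes
# are redistributed from the ExpressRoute circuit into the core network?
import ipaddress

# Placeholder prefixes standing in for the redistributed Office 365 ranges;
# not an authoritative Microsoft list.
EXPRESSROUTE_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def preferred_path(destination: str) -> str:
    """Return which routing path a destination IP would follow."""
    addr = ipaddress.ip_address(destination)
    if any(addr in prefix for prefix in EXPRESSROUTE_PREFIXES):
        return "ExpressRoute (injected public prefix)"
    return "Internet (default route)"

if __name__ == "__main__":
    for dest in ("203.0.113.10", "8.8.8.8"):
        print(dest, "->", preferred_path(dest))
```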
Security Change Management
Office 365 addresses change from time to time, and these changes need to be reflected in the enterprise’s internal security and proxy infrastructure. Failing to do so can result in intermittent or total loss of connectivity to Office 365 services once the ExpressRoute circuit is enabled.

Microsoft publishes a comprehensive and regularly updated document (and REST API) listing the domains, IP addresses and TCP/UDP ports that need to be configured, and continuously updated, on the enterprise security and routing appliances to enforce corporate security and governance policies.
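As a hedged sketch of what that change-management automation could look like, the snippet below pulls the published endpoint list and prints the entries flagged for ExpressRoute. The URL and JSON field names reflect Microsoft’s Office 365 IP Address and URL web service as commonly documented; verify them against Microsoft’s current documentation before relying on them.

```python
# Hedged sketch: query the Office 365 endpoint web service and list the
# entries advertised over ExpressRoute. URL and field names should be
# verified against Microsoft's current documentation.
import json
import urllib.request
import uuid

ENDPOINTS_URL = ("https://endpoints.office.com/endpoints/worldwide"
                 "?clientrequestid=" + str(uuid.uuid4()))

def fetch_endpoints():
    """Download the published Office 365 endpoint definitions as JSON."""
    with urllib.request.urlopen(ENDPOINTS_URL) as response:
        return json.load(response)

def print_expressroute_entries(entries):
    """Show the service areas, IP prefixes and URLs flagged for ExpressRoute."""
    for entry in entries:
        if entry.get("expressRoute"):
            print(entry.get("serviceArea"),
                  entry.get("ips", []),
                  entry.get("urls", []))

if __name__ == "__main__":
    print_expressroute_entries(fetch_endpoints())
```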
Operational and Security Visibility
NAT devices offer little to no visibility into the application traffic that passes through them.
So how did we approach this scenario when nothing appeared
to be good enough?
We worked out an elegant solution that removes the need for
managing complex network infrastructure and security policies while providing
employees with the benefits of ExpressRoute connectivity to access Office 365
and related application services.
Volterra’s integrated network and security stack includes an application router with programmable proxy and load balancing capabilities to address this need. Beyond these features, Volterra provides a simple pathway for enterprises to evolve towards zero-trust security.
At a high level, the enterprise will have a Volterra Application Gateway cluster peering with the ExpressRoute router where Microsoft presents the Office 365 routes/services. The Volterra Application Gateway performs automated discovery of the Office 365 endpoints via this router, allowing enterprises to access Office 365 services through the gateway.
This removes the need for operators to manage yet another process that translates dynamic configuration into rules on their infrastructure. The Volterra Gateway auto-discovery feature allows clients to change the destination of their requests constantly; the gateway triggers auto-discovery and enforces TLS integrity for every incoming request.
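Volterra’s gateway internals are proprietary, but the general pattern it automates can be illustrated: resolve whatever hostname the client asked for at connection time and validate the upstream TLS certificate before forwarding anything. The sketch below shows that check with Python’s standard library; it illustrates the pattern, not Volterra’s code.

```python
# Illustration of the general pattern only -- not Volterra's implementation.
# Resolve the requested hostname at connection time and validate the
# upstream TLS certificate (chain and hostname) before forwarding traffic.
import socket
import ssl

def verify_upstream(hostname: str, port: int = 443) -> dict:
    """Connect to the requested endpoint over TLS and return its validated
    certificate; raises if the chain or hostname check fails."""
    context = ssl.create_default_context()  # CA validation + hostname check
    with socket.create_connection((hostname, port), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    cert = verify_upstream("outlook.office365.com")
    print("issued by:", cert.get("issuer"))
    print("valid until:", cert.get("notAfter"))
```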
The second part of the solution is to expose these services
to corporate users without advertising the Microsoft public routes within the
enterprise network. This can be achieved in two ways:
The first method is to do this directly from the Volterra Application Gateway cluster in Azure over the ExpressRoute circuit. Note that the only IP injection into the enterprise network will be the IP address of the Volterra Application Gateway cluster in Azure. For illustration purposes we are saying that the gateway is in Azure, but it can be located anywhere as long as these two conditions are met:
- Users can access the gateway to send traffic to it (or the gateway intercepts it)
- The gateway is connected to an ExpressRoute circuit where Office 365 endpoints are located / discoverable
The second method is to add one (or many) Volterra Application Gateway clusters within the corporate premises (and private DCs). We then configure policies to expose the discovered Office 365 endpoint services on the on-premises Application Gateway cluster. These endpoints are discovered via the Volterra Application Gateway that sits in the Azure cloud (as explained in the first method). In addition, the enterprise operations team can configure further security and authentication policies while encrypting all traffic.
The below pictures represent these scenarios:
The Volterra Application Gateway cluster also implements auto-scaling features, meaning that when one gateway crosses a specific threshold it will spin up another one and add it to the cluster.
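The scaling logic itself belongs to Volterra; the schematic below merely sketches the threshold-driven loop described above, with the metric source and the provisioning call standing in as hypothetical placeholders.

```python
# Schematic sketch of threshold-driven scale-out as described above.
# read_utilization() and add_gateway_instance() are hypothetical
# placeholders, not a Volterra API.
import time

SCALE_OUT_THRESHOLD = 0.80  # e.g. 80% of a gateway's capacity

def read_utilization(cluster):
    """Placeholder: return the busiest gateway's utilization (0.0 to 1.0)."""
    return max(gateway["utilization"] for gateway in cluster)

def add_gateway_instance(cluster):
    """Placeholder: provision a new gateway and join it to the cluster."""
    cluster.append({"utilization": 0.0})

def scale_loop(cluster, interval_seconds=30):
    """Check the cluster periodically and scale out past the threshold."""
    while True:
        if read_utilization(cluster) > SCALE_OUT_THRESHOLD:
            add_gateway_instance(cluster)
        time.sleep(interval_seconds)
```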
Other features of the Volterra solution that provide
significant benefits to the enterprise network, security and operations teams
are:
- Operational simplicity with centralized policy and SaaS-based management
- Integrated/unified proxy, security and routing with programmable data plane
- Granular and rich visibility, logging, and metrics of application usage and access
By achieving simplicity and operational excellence, we believe this solution is the real answer for organizations pursuing goals similar to UBS’s: it ensures that SaaS services such as Office 365 can be accessed privately without the need to make your corporate network “available to, or even part of, public networks.”
Who is Volterra?
Volterra is an innovative startup that provides a
distributed cloud platform to deploy, connect, secure and operate applications
and data across multi-cloud and edge sites. Enterprises benefit from greater
innovation, faster time-to-service, and simplified operations.
Investors & Board Members: M12, Microsoft’s venture fund (Nagraj Kashyap); Mayfield (Navin Chaddha); Khosla Ventures (Vinod Khosla); and Samsung NEXT.
Anyone who has worked in IT, particularly in the financial sector, will appreciate the revolving door of wholesale outsourcing, insourcing, strategic outsourcing and one hundred other flavours in between. Well, my question is: “What is the long-term forecast… can we expect clouds?”
The simple answer is yes: they’re here and they’re here to stay. But wait a minute, before you exit all your prime data centre real estate and sell it to the cloud providers, is the outlook completely overcast?
Now, I’m not talking about SaaS services; I’m talking about using a cloud provider to host all or large portions of your applications, storage and compute: outsourcing your infrastructure and application hosting. It’s important to state that there is a massive advantage in doing this. Not only will it bring a level of application understanding we have all been crying out for over the years, but it should also deliver cost transparency, dynamic east/west scalability and massive operational visibility, automation and therefore uptime.
But once we have refactored our apps, built our intent-driven cloud ops SRE function, delivered zero-touch provisioning and full end-to-end automation, and had time to fall off our chairs at the fees and complexity of the bills, it’s my firm belief that we will be looking to move all or part of our estate back into our own data centres.
The key to a successful re-insourcing, at some point in the future, will be preparation. Whilst our application developers, program managers and cloud SREs are busy moving to the cloud, we should, in parallel, learn everything we can from the cloud builders to design and deliver a safe landing zone for the inevitable repatriation that will come when cost is revisited.
In essence, let’s use the cloud to rewrite our applications, understand our costs and refresh our infrastructure, with best practices learnt and honed in the cloud brought back on-prem and delivered at lower cost.
Therefore, vendors who build cloud migration capabilities will be king. Enterprises that plan ahead can use the time spent migrating to the cloud to rebuild their foundations, settle their technical debt and develop the right entry point to deal effectively with the inevitable reverse migration that future financial austerity will most certainly bring.
Firms are already republishing their cloud strategies to draw focus away from cost savings, with a renewed emphasis on the undisputed benefits of cost transparency, scale and agility that the cloud offers.
It is my opinion that the long-term (3 to 5 year) forecast will be one of substantial cloud adoption. This is certainly driving us towards developing better applications that are agile and flexible, but the biggest benefit will be understanding the true cost of an application against the profit or benefit earned from it.
However, we will be returning to on-premises hosting, and the smart money will be on those organisations that can plan an on-premises strategy to enable and support the repatriation of applications back on premises whilst offering flexibility, scale and cost transparency comparable to that enjoyed within the cloud, ultimately blurring the lines between on-prem and off-prem services and settling on a suitable mix of the two.