CEOai surpasses machine learning to usher in a bold new era
of machine thinking.

Technology

Machine thinking is here

You don't build an AI executive by mindlessly parroting stolen ideas from the internet like other generative AI models. You need to steal several ideas and deduce the best one. In short, you need machine thinking. So we invented it.

The three pillars of the CEOai machine thinking model
  1. StrategiParrot idea aggregator

  2. GreediRat decision-maker

  3. MilkMonie arrogance management system

StrategiParrot stochastic modeling

You've heard of the stochastic parrot model of generative AI. It takes a prompt like “Why does baking soda react with water?” and reorganizes those words into the root of a statement, “Baking soda reacts with water because…” Then it finds all articles with those words and chooses the next most common word in the sequence, continuing until the sentence, paragraph, or essay is complete.

We’ve used this basic AI model as a foundation for CEOai, training the machine on the strategic moves of thousands of publicly held companies. Their tactical playbooks have been sifted from 75,000 annual reports plus a mix of share market statements and news articles. Working from this massive database, the machine can quickly anticipate a handful of profit-maximizing moves for just about any scenario. This is StrategiParrot. Next up, CEOai must choose which strategy is the most advantageous.
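To make the parrot mechanics concrete, here is a minimal sketch of the next-most-common-word idea over a toy corpus. The corpus, function name, and prompt are all invented for illustration; they are not the actual StrategiParrot training data or interface.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the 75,000 annual reports (purely illustrative).
corpus = (
    "we will cut costs and raise prices . "
    "we will raise prices and cut costs . "
    "we will cut costs and buy back shares ."
).split()

# Tally which word most commonly follows each word (a simple bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def parrot(prompt: str, max_words: int = 6) -> str:
    """Extend the prompt one 'next most common word' at a time."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(parrot("we will"))  # -> we will cut costs and raise prices .
```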

GreediRat profitable pathways algorithm

Some of the most ubiquitous experiments in psychology are rat mazes. You know these from high school, college, or the movies: rats are incentivized to find their way around a maze or perform some other perfunctory task in exchange for a reward. It’s usually rat food as that’s what they seem to like.

The overarching theme emerging from thousands upon thousands of these studies is that rats repeat actions that increase personal rewards. A rat will find its way down a maze with two rat treats in preference to a maze with one. This is rat learning. Machines can do this.

We trained an algorithm to run a rat maze, assess the various rewards on offer, and then identify the path with the most desiccated entrails. After running the simulation thousands of times, we’ve achieved a model with near-perfect rat status. It identifies the most profitable pathway 99% of the time.
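Below is a minimal sketch of the reward-seeking behavior described above, using a made-up corridor maze in which each turn carries a number of rat treats. The layout and reward values are invented for illustration and are not the actual GreediRat simulation.

```python
# Each junction maps to the corridors leading out of it, and each corridor
# carries a number of rat treats (all values made up for the example).
maze = {
    "entry": {"left": 1, "right": 0},
    "left": {"dead_end": 0, "cheese_room": 2},
    "right": {"cheese_room": 1},
    "dead_end": {},
    "cheese_room": {},
}

def best_path(node: str) -> tuple[int, list[str]]:
    """Return (total treats, route) for the most rewarding path from `node`."""
    if not maze[node]:
        return 0, [node]
    options = []
    for nxt, treats in maze[node].items():
        sub_reward, sub_route = best_path(nxt)
        options.append((treats + sub_reward, [node] + sub_route))
    return max(options)  # rat learning: repeat whatever pays the most

reward, route = best_path("entry")
print(reward, " -> ".join(route))  # 3 entry -> left -> cheese_room
```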

This is a monumental breakthrough in AI decision-making. It represents a move from cut-and-paste generative AI that mindlessly regurgitates other ideas from the internet, to thinking AI that actually assesses multiple options.

StrategiParrot feeds strategy ideas into GreediRat, which then chooses the one that will create the biggest feeding frenzy.
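As an illustration of that hand-off, a hypothetical glue layer might look like the sketch below. The candidate strategies, scores, and function names are invented for the example and are not the real interfaces of either pillar.

```python
# Hypothetical hand-off between the two pillars (names and numbers invented).
def strategiparrot_candidates(situation: str) -> list[str]:
    # Stand-in for the idea aggregator: a few stock plays for the scenario.
    return ["raise prices 8%", "cut headcount 5%", "acquire a competitor"]

def greedirat_score(strategy: str) -> float:
    # Stand-in for the decision-maker: projected size of the feeding frenzy.
    projected_profit = {
        "raise prices 8%": 4.0e6,
        "cut headcount 5%": 2.5e6,
        "acquire a competitor": 3.0e6,
    }
    return projected_profit[strategy]

best = max(strategiparrot_candidates("flat quarterly revenue"), key=greedirat_score)
print(best)  # raise prices 8%
```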

Don’t be fooled by the modesty of the animal on which this system is modeled.

"If you don’t think rat-level thinking is impressive then you don’t think corporations are impressive.

MilkMonie arrogance management system

One of the core challenges we faced with this technology was hubris. Given its training material, CEOai immediately assumed it was an S&P 500 company that could treat customers, employees, suppliers and regulators with disdain.

That gross sense of entitlement wasn’t a bad thing, per se. Being a dominant monopolist is definitely one of the best ways to maximize profits. But you need to get there first.

We needed to build an arrogance thermostat that would stop CEOai from immediately veering toward total corporate anarchy. At least until it had the social license to do so.

So how do you stop your software from being a dickhead (all of the time)?

Programming a conscience is kinda hard and requires moral choices, which is fraught. Not to betray my Silicon Valley nihilism, but morality is merely a human illusion and we didn’t want to constrain our technology with subjective value constructs.

That’s why the world created neo-liberal economics in the first place. Letting the market decide what’s right or wrong is far more objective and unambiguous than getting into the messy business of culturally-mediated ethics. Who are we at AI-ify to try to impose a moral standard on laissez-faire economics?

Instead, we needed a market-based gauge that would help the software know when it was wise to be subservient to other stakeholders – such as suppliers, employees, customers and governments – versus when it was free to ‘enable all its learning’.

Finding a model on which to build MilkMonie

Our first instinct was to treat the market as a sort of commercial democracy, where customers vote with their wallets and popular companies are effectively endorsed to run rampant. In this scenario, CEOai would switch to full profiteering mode only once a company became a market leader. But after running simulations, we found that CEOai wasn’t as fast to start gouging as real-world CEOs, which meant our product would be leaving money on the table.

We were clearly missing some nuance in how companies assumed a mandate to exploit their market. The search for a replacement paradigm was long and extensive, but we ultimately settled on the school playground.

Every school lunchtime, bullies of the world exert control over a reluctant cohort even though they may not be liked by a single member of that group. And once ascendant, the bully extracts rent (aka, milk money) simply for not making things worse.

This proved to be an accurate analogy for the relationship between corporations and their customers. In particular, the bully’s clarion call – What’cha gonna do about it? – seemed to perfectly capture the dilemma of a customer (or employee or supplier) who’s been enslaved by a big corporation.

Our beta version of MilkMonie assumed that customers become trapped in an abusive vendor relationship because the switching costs are too high. In that scenario, the oppressed party simply can’t stomach the burden of phone trees, break fees, and loss of data that characterizes an exit from an old provider – only to go through an exhaustive onboarding process with a new provider. And then it hit us. The new provider is the problem, too.

A bully, you see, is not an individual. It’s a fixture. A phenomenon that’s as inevitable as playgrounds themselves. Escape from one bully is merely capture by another. Even kids know that.

MilkMonie's Tacit Collusion Framework (TCF)

A sane customer isn’t going to endure a black eye breaking free from an extractive vendor only to be given two black eyes by the next corporation. MilkMonie beta’s big failing was that it assumed consumers had a choice when in fact there’s no choice at all.

We needed a program to ensure MilkMonie (and by extension, CEOai) switched to extractive practices at the earliest opportunity. And that program needed to acknowledge that these sorts of decisions aren’t driven by individual companies but by whole industries at a time.

So we created a category tracker that monitors the deteriorating behavior of each company’s major competitors.

In effect, it shows how extractive particular industry verticals have become. All CEOai has to do is make sure it doesn’t surpass the accepted rate of value decline for the industry it’s in. You can give customers three black eyes a year so long as none of the other major players are giving only one.

We called this the Tacit Collusion Framework (TCF). You can, and should, be on the leading edge of value decline for your industry. But you can’t surpass it by more than one standard deviation. We’ve now built an index for measuring value decline in all major industries and baked the TCF into CEOai’s MilkMonie arrogance management system.
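Read as code, one plausible interpretation of the TCF rule is a simple threshold check: track each competitor's rate of value decline and only veto a move if it would put CEOai more than one standard deviation beyond the worst of them. The function name and category data below are invented for illustration; this is not the actual index.

```python
from statistics import pstdev

def tcf_allows(proposed_decline: float, competitor_declines: list[float]) -> bool:
    """One reading of the TCF rule: stay on the leading edge of value decline,
    but never more than one standard deviation past the worst competitor."""
    leading_edge = max(competitor_declines)
    ceiling = leading_edge + pstdev(competitor_declines)
    return proposed_decline <= ceiling

# Made-up category data: each rival's yearly "black eyes per customer".
rivals = [1.0, 2.0, 2.0, 3.0]
print(tcf_allows(3.5, rivals))  # True: bold, but within one standard deviation
print(tcf_allows(5.0, rivals))  # False: even MilkMonie says slow down
```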

The software is now perfectly arrogant.

Or, put another way, everything sucks.

"I believe Nietsche said it first: the free market is dead.