The Big Bang v3.0 – When AI Comes for the Professional Class


In 1986, the City of London adopted the phrase ‘The Big Bang’ to describe the sudden deregulation of financial markets that led to a rapid expansion of market activity. Thirty-six years later, ChatGPT was released to the public – another event worthy of the name, and one with perhaps even greater impact. ‘Artificial intelligence’ (“AI”) jumped from being a relatively narrow area of research within computer science to making headlines around the world, prompting journalists and pundits to question the future of work, the potential for machines to exceed human intelligence and even whether these systems could become self-aware.

More practically, businesses wondered how AI (and its subsets, machine learning (“ML”) and large language models (“LLMs”)) would change the competitive landscape by disrupting traditional ways of working. Less than three years after ChatGPT arrived, the impacts are becoming clearer.

This blog analyses the impact of AI on equity portfolio management, an approach to investing in which a person actively selects stocks according to a defined investment process. Portfolio management is a high-level role typically filled by highly educated professional investors with teams of people, supported by sophisticated IT systems and infrastructure, which makes it a very expensive activity. Adopting AI technologies in portfolio management may lead to improved performance, although any performance delta is inherently difficult to measure. More concrete (and immediately viable) is the potential to reduce costs, given the exceptionally high cost of labour within portfolio management.

Before we start looking at the various aspects of portfolio management let’s address the elephant in the room, which is whether AIs can just manage portfolios directly:

Dear ChatGPT: invest my money in a way that gives me better returns than if I gave the money to some random fund manager.

It seems like an obvious request, but it is overly ambitious. Fortunately (or unfortunately, depending on your point of view), the short answer is no, this would not work. As we will see later, a ‘fully automated’ model that can outperform the market is not just unrealistic but likely impossible, regardless of the progress made by ML models. That is not particularly discouraging, though; the purpose of this blog is to analyse where AI can add value amongst the myriad processes used to create and manage a portfolio (and there are many such places).

A Brief Background of the Asset Management Industry

Globally, the value of publicly listed shares is approximately US$120 trillion, and these shares are held by a wide range of retail and institutional investors that employ various passive and active management strategies. Actively managed portfolios require a portfolio manager (a “PM”) to choose which companies to own and in what proportion in a real-time, high-information flow environment.

PMs typically manage portfolios according to a specific set of criteria that forms an investment mandate[1], which places limits on the portfolio, such as the universe of stocks it can hold and in what proportion. Outside of these strict mandates PMs typically have wide discretion in how they build their portfolio and will use a variety of inputs to determine whether to add a new stock, rebalance a stock’s weighting or sell a stock out of their portfolio. These factors can be broadly categorised into fundamental analysis[2], technical analysis[3], factor-based investing[4] and ‘other considerations’[5].

By integrating some subset of these factors, PMs make informed decisions about which stocks to add to their portfolios to achieve their investment objectives. Successful PMs buy stocks that are going to go up and sell (or avoid) stocks that are going to go down.

Underlying this simplistic goal is a broad body of work done by teams of people that are highly trained in the fields of microeconomics, macroeconomics, accounting, financial theory, market theory, corporate strategy and politics. Ultimately, the best research is a non-consensus recommendation that turns out to be right – can AI help achieve that?

There’s AI, Then There’s AI

The definition of AI continues to shift as computer scientists pick off characteristics that at one point were deemed ‘obviously intelligent’ but became status quo relatively quickly. A computer that plays chess or Go is now just a good program (and defined as ‘narrow’ or ‘weak’ AI). General (or strong) AI should be able to perform any intellectual task a human being can.

Interestingly, some portfolio managers have used AI for a long time. The term algorithmic trading describes a system that executes orders using pre-programmed trading instructions with no human intervention or oversight. For all intents and purposes, algorithmic systems are ‘artificially intelligent’, in that they do something that normally requires human intelligence – in this case, financial analysis, asset pricing and decision-making. It would be classed as a ‘weak’ implementation of AI because the use case is narrow, but when the systems are effective, they are very valuable. These systems, however, are still inherently biased by their human creators and are prone to long-tail events and underlying changes to the market structure, making them part of a relatively niche subset within asset management.

Using foundation models is a different way of approaching the problem, and unlike algorithmic trading, these models can be applied to many of the pre- and post-portfolio-construction processes even if they are not used specifically to decide on portfolio composition or the weights of the stocks within a portfolio. We see the value potentially manifesting in two ways:

  • Information is gathered and analysed more quickly; and
  • Information is gathered and analysed more effectively.

As noted above, this raises the question of whether to go all the way … why not just get the model to make the decision about which stocks to buy? To dispel the myth that the machines can simply take over, we must first review a key theory that underlies developed markets.

The efficient market hypothesis (“EMH”) states that:

Share prices reflect all available information and consistent alpha generation is impossible. [6]

A key premise of the EMH is that a well-functioning market has many participants with a wide variety of skills and perspectives, and collectively these participants ensure that the price of a particular security reflects all of this knowledge (often called the ‘wisdom of crowds’)[7].

In other words, the very act of a large number of participants, each trying to use their specific knowledge or skill set to buy undervalued shares (or sell overvalued ones), ensures that it is impossible to do so – a paradox. A corollary to this view of market efficiency is that the market itself functions much like an LLM: the input is a stock ticker, and the output is the fair-value price of that stock. If a model gives you a different price, it is because the model has not taken into consideration all of the public data available or has been modified to include specific biases. (I would also point out that the market reflects new information in share prices almost instantaneously, whereas an AI model is typically static between its very expensive training runs.)

But even assuming the EMH holds and that we cannot train an AI model to manage a portfolio independently, AI still has the potential to deeply disrupt the industry just by doing what it is well suited to doing.

Breaking Down the Investment Process

Before we dig into specific processes, it is useful to segregate all underlying portfolio management tasks into two categories: (1) information gathering; and (2) analytics. The reason we use these specific categories is to separate tasks into the areas where AI systems are weak and where they are powerful. For the general public, information gathering has been solved by Google or aggregated information websites like Wikipedia; both are efficient and effective. Proprietary industry tools such as Bloomberg provide easy and instant access to detailed information needed by research analysts, portfolio managers and other financial professionals. This ‘information gathering’ process is a deterministic one. Questions like ‘what was the GDP of Thailand in 2024?’ or ‘What is the short interest in Tesla shares?’ have definite correct answers at a particular point in time.

However, many of the analytic processes that encompass picking stocks are probabilistic processes – i.e. the analytics process provides an output, but there is no way of knowing whether that output is ‘correct’ at the time of the question or decision event. You only know if your analytical processes are sound if you buy a stock and it increases in value over your holding period[8].

This is an especially important characteristic because probabilistic processes are significantly more compatible with ML/AI models than deterministic ones. And whilst ML/AI models can function as databases and perform search-like functions, that is where many of their weaknesses lie: it is often deterministic queries that are most susceptible to hallucinations.
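To make the distinction concrete, here is a toy sketch. All names and figures are hypothetical, purely for illustration: the point is that a deterministic query is checkable the moment it is answered, while a probabilistic call can only be graded after the holding period.

```python
# Illustrative sketch only: the data and figures below are hypothetical.
FACTS = {("short_interest", "TSLA"): 0.03}  # made-up point-in-time snapshot

def deterministic_query(metric, ticker):
    # e.g. "What is the short interest in Tesla shares?" -- has a definite
    # correct answer at a particular point in time
    return FACTS[(metric, ticker)]

def probabilistic_call(expected_return):
    # e.g. "Should we buy this stock?" -- the output cannot be verified
    # at the time of the decision event
    return "buy" if expected_return > 0 else "avoid"

def grade_after_holding_period(decision, realised_return):
    # only now do we learn whether the analytical process was sound
    return (realised_return > 0) == (decision == "buy")

decision = probabilistic_call(expected_return=0.07)
print(grade_after_holding_period(decision, realised_return=-0.32))  # False
```

The first function either returns the right number or it doesn't; the second can only be scored in hindsight, which is exactly why benchmarking it is harder.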

There is one additional characteristic of a process that indicates whether ML models can be applied effectively, and that is around the ‘structure’ of a particular workflow. Data-rich, linear, sequential, exam-style workflows are ideal. Complexity is not the issue – it is the ‘cleanliness’ of the process. Multiple streams of information, a dynamic environment or unclear or changing goals are more challenging for AI and the models will deliver less reliable answers. Fortunately (again, depending on your perspective), many equity research and portfolio management tasks conform well to this definition of cleanliness while also being highly probabilistic. Some examples include:

  1. Analysing a company’s financial statements
  2. Analysing a company’s investor presentations
  3. Analysing a company’s press releases
  4. Analysing a company’s management interactions with market participants
  5. Analysing a company’s competitive position within its market
  6. Analysing macroeconomic factors
  7. Monitoring the portfolio for unusual share price movements
  8. Identifying outlier events

Performing these tasks effectively requires a significant amount of technical knowledge across a broad range of subjects, many of which are ‘expensive’[9]. Automating these functions would disrupt a cohort of professionals that would have been, up to now, near the top of the professional class.

Benchmarking the Models

In order to determine whether AI can adequately perform any or all of these tasks, we will ask the following key questions:

  1. Which investment processes are the models best suited for?
  2. Is there a noticeable performance difference between the models available today?

Note that probabilistic processes are inherently more challenging to benchmark than deterministic ones. If we ask Google ten deterministic questions and verify that the answers are correct, we can reasonably assume that similar questions will yield similar results; Google doesn’t determine the answer, it finds the answer, so what we are really testing is (a) whether the answer is out there, and (b) whether Google can find it. A probabilistic question carries none of these guarantees, so a model may provide a reasonable answer to one line of questioning and follow it with an unreasonable answer to another.

That means it is not feasible for us to test all models on a wide range of questions – there are just too many potential instances. Therefore, we will start by trying to identify a model that clearly outperforms the others and then dig deeper from there, which means answering the second question first: is there a performance difference between the models currently available (and, as a follow-up, which one performs best at the tasks we are looking to automate)?

To test the models, we are going to use a specific example. We’ve chosen to analyse the full-year results of Ocado Group, a publicly listed global supplier of robotics and automation software to grocers, and the joint owner of an online grocery retailer in the UK. Ocado released its full-year 2024 results on February 25th, 2025, with the headline “Strong FY24 results in line with guidance; revenues, profits and cash flows all improving significantly”. The CFO was upbeat on the earnings call held later that day. Yet over the next four days the stock fell 18%, 5%, 2% and 11% in successive sessions, while the broader market was actually up over the same period – very poor performance, especially considering the positive headlines and upbeat tone on the results call. We wanted to see if any of the models could uncover the disconnect between what was reported, what was said and how the market reacted. We asked ChatGPT, Claude, Granite and Perplexity the following generic question:

Based on the positive views of the CFO and results, why are analysts and the market still negative on Ocado shares?

This is not a particularly eloquent way of phrasing the question, and I was keen to understand whether the models could pick up on its inherent scepticism. But the goal was not to find the best prompt; it was to see how each model responded to the same one, even if it was asked in a relatively clunky way.
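As an aside, the scale of the share price move described above is easy to underestimate when quoted day by day. Compounding the four daily falls shows the shares lost roughly a third of their value in four sessions:

```python
# Compound the four consecutive daily falls quoted above
# (-18%, -5%, -2%, -11%) into a single cumulative move.
daily_returns = [-0.18, -0.05, -0.02, -0.11]

cumulative = 1.0
for r in daily_returns:
    cumulative *= 1 + r

print(f"Cumulative move: {cumulative - 1:.1%}")  # about -32%
```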

The responses were informative (at least about the models themselves, if not the actual reasons for Ocado’s underperformance). Starting with ChatGPT, it gave the following response:

[Screenshot: ChatGPT’s response]

To a member of the general public, this probably sounds like a reasonable answer, but to a professional investor, it is completely useless. We highlighted the parts of the response that demonstrate some level of specificity around Ocado’s operating model, but even those sections were relatively generic and said little about the results or their impact on the share price. A junior analyst providing this feedback to a PM would be viewed as incompetent.

Next up was Claude, which produced this:

This is marginally worse than ChatGPT: no insight, no real analysis, and not worth discussing further. Next, I tried Granite (with its ‘thinking enabled’ option checked):

This is definitely an improvement; the area of international expansion is highlighted, which is one of the main areas of focus for Ocado. However, whilst Granite’s answer is broader and slightly more detailed (albeit a bit long-winded), it doesn’t add much value to a PM beyond generic issues applied to Ocado’s business.

Finally, I tried Perplexity, and got the following:

It is hard to overstate just how much better this answer is than any of the others – and indeed, how good this answer is in absolute terms. A junior analyst providing this report to their PM would be viewed as thoughtful, thorough and accurate. It’s a 10/10. And for our purposes, it makes things easy – on the strength of this one response, Perplexity appears to be a legitimate competitor to human endeavour, with the others hardly worth mentioning.

Model Architecture and Structure – the Road to a Sustainable Business Model

At this point it’s worth asking why Perplexity is so much better than the most high-profile foundation models available. To do so, we first look at foundation models in the context of a key benchmark: ranked intelligence. A recent article in the Wall Street Journal highlighted the narrowing of ‘intelligence’ between the leading foundation models, as shown in the following chart.

Source: Artificial Analysis, as cited in the Wall Street Journal article “Here’s How Big the AI Revolution Really Is, in Four Charts”, April 24, 2025

From these data you could conclude that foundation models are quickly converging, at least on the ‘intelligence’ axis. (Other relevant factors such as model size, number of tokens, speed, cost of training and cost of querying further differentiate the models, but those are ‘background’ characteristics that matter less than the veracity of their answers.) Overall, the difference between what they do is, currently, not very material.

Perplexity, on the other hand, is not a foundation model, but rather an application layer built on top of multiple foundation models ‘tuned’ to specific use cases. Aravind Srinivas, one of Perplexity’s co-founders and a former Ph.D. student, approached the problem of search from an academic perspective and calls Perplexity an ‘answer engine’, not a ‘search engine’.

His vision was for a clear, trustworthy source of information that cites only content provided by other, verified sources, using LLMs as the judge. This is why one of Perplexity’s main features is clearly identifying the sources it uses to answer any query; it also tries to find academic and industry sources to give it a deeper pool of expertise from which to draw responses[10]. This reliance on multiple sources also tends to reduce hallucinations (as seen in the very specific quantitative data points in its response about Ocado, above).

So How Does It Work?

There does not seem to be a readily available comprehensive explanation of Perplexity’s architecture or internal processes, which meant some sleuthing was required to determine how it functions. This analysis was pieced together from interviews, articles and posts.

First, Perplexity uses LLMs to interpret queries, which you can see after you type in a question and it repeats it back to you, usually more concisely and precisely than the way you posed it. It then uses this ‘clean’ query as a search prompt, following links and fetching content from a variety of sources using its own highly sophisticated web-crawling bots as well as third parties. As a side note, Srinivas has explicitly talked about using more old-school algorithms for search, such as Okapi BM25, which he described as “so good”[11], as opposed to trying to pack all the information into a gigantic vector space. Given Perplexity’s mandate, it is not surprising that their approach to search is based on the quality of the data rather than the quantity; within large vector spaces it is difficult to understand what portion of a document might be relevant to a particular query, hence their bias towards other algorithms.
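For the curious, Okapi BM25 is compact enough to sketch in a few lines. This is a textbook implementation with the usual default constants (k1 = 1.5, b = 0.75) and a toy corpus of our own; it is not Perplexity's code.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document (a list of tokens) against the query terms."""
    N = len(docs)
    avg_len = sum(len(d) for d in docs) / N
    # document frequency: how many docs contain each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            # rarer terms carry more weight (inverse document frequency)
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # term-frequency saturation, normalised by document length
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avg_len)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores

# Toy corpus (illustrative): documents that do and don't match the query.
docs = [
    "ocado fy24 results revenue growth".split(),
    "grocery delivery market share".split(),
    "ocado technology solutions revenue guidance".split(),
]
scores = bm25_scores("ocado revenue".split(), docs)
print(scores)  # the middle document, sharing no terms, scores zero
```

Note the design trade-off the quote alludes to: BM25 scores exact term matches, so its relevance judgments are transparent, whereas a dense vector space can match on meaning but makes it harder to see which part of a document earned the score.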

If the search/retrieval function performs poorly for a specific query, Srinivas said, “LLMs can still find a needle in the haystack and traditional search cannot … You get the right link maybe in the 10th or 9th [result]. You feed it in the model. It can still know that that was more relevant than the first [result].”[12]

Although that backup plan probably works well, the search/retrieval function appears to be very robust, and if a search is deemed to have worked well, Perplexity then moves on to retrieval-augmented generation, or RAG. RAG allows LLMs to gain access to external knowledge, which is a simple concept but tricky to implement seamlessly and at scale. Perplexity actively limits the LLM’s contributions beyond the retrieved sources to achieve better factual grounding and thereby reduce hallucinations.

Once the relevant documents are retrieved and ranked (using, for example, trust scores for domains), Perplexity extracts relevant sections, paragraphs, sentences and snippets and feeds them into one or more LLMs (which are chosen dynamically, with the model selected based on which one is best suited for the query) along with the original query. The RAG information is referenced in all answers, enabling verification of results by the user after the fact.
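Pulling the pieces together, the pipeline as we understand it can be sketched as follows. Every function name, data shape and the trust-score heuristic here are our own assumptions for illustration; Perplexity's actual internals are not public.

```python
# Speculative sketch of the query -> search -> rank -> RAG flow described
# above. All names and heuristics are OUR assumptions, not Perplexity's code.
def answer(query, rewrite, search, generate, trust_scores, top_k=5):
    clean_query = rewrite(query)                      # 1. LLM restates the query
    candidates = search(clean_query)                  # 2. crawl/retrieve candidates
    ranked = sorted(                                  # 3. rank: relevance x domain trust
        candidates,
        key=lambda d: d["relevance"] * trust_scores.get(d["domain"], 0.5),
        reverse=True)
    snippets = [d["snippet"] for d in ranked[:top_k]]  # 4. extract relevant snippets
    return {                                           # 5. grounded answer + citations
        "answer": generate(clean_query, snippets),
        "sources": [d["domain"] for d in ranked[:top_k]],
    }

# Stub components so the sketch runs end to end.
rewrite = lambda q: q.strip().lower()
search = lambda q: [
    {"domain": "randomblog.net", "relevance": 0.95, "snippet": "hot take on Ocado"},
    {"domain": "thegrocer.co.uk", "relevance": 0.90, "snippet": "Ocado FY24 results detail"},
]
generate = lambda q, snippets: " | ".join(snippets)
trust = {"thegrocer.co.uk": 0.9, "randomblog.net": 0.1}

result = answer(" Why did Ocado shares fall? ", rewrite, search, generate, trust)
print(result["sources"])  # the trusted domain outranks the higher raw-relevance blog
```

Even this toy version shows why source weighting matters: the untrusted blog has the higher raw relevance score, but the trust-weighted ranking puts the industry source first, and the generated answer is grounded only in the retrieved snippets.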

From an architectural perspective, Alexandr Yarats, Head of Search at Perplexity, said that they “engineered our system where tens of LLMs (ranging from big to small) work in parallel to handle one user request quickly and cost-efficiently. We also built a training and inference infrastructure that allows us to train LLMs together with search end-to-end, so they are tightly integrated. This significantly reduces hallucinations and improves the helpfulness of our answers.”[13]

With this background, it is clear that we were cheating a little when comparing Perplexity to the other models – Perplexity actually uses those very same foundation models, just in a different way. From a practical perspective, building specific application layers on top of foundation models seems to be the way forward when subject matter expertise is important. We are seeing other value-adding applications such as Cursor (for coding), Synthesia (for video creation) and ElevenLabs (text to speech) built using many of the same technologies as those that underlie foundation models. In our specific use case, the value difference between a foundation model’s answers and Perplexity’s is not a continuous function – the answers jump from unusable, with negative value, to expert-level, with high positive value. And that is the point at which a potentially sustainable business model emerges.

To further test our case for Perplexity, we also ran it against the list of eight tasks we identified in the investment process earlier and found that it produced concise, relevant, accurate and actionable answers in virtually all (but not all) cases. However, whilst fallibility is a material risk in some use cases (say, medical imaging), in the realm of portfolio management 100% accuracy is not a requirement. The goal is to see whether AI can streamline and operationalise the processes associated with portfolio management, not whether we can actually replace the portfolio manager. (As noted above, we subscribe to the efficient market hypothesis, which implies a ‘market model’ cannot beat the market anyway.)

The upside to having this structure is that the PM remains responsible for checking the output, which they can then benchmark against their entire knowledge base. In the cases where Perplexity gave inconsistent or misguided analysis, it was easy for a market expert to see and dismiss. For example, when asked about Ocado’s final mile delivery options, Perplexity said that “customers can choose from same-hour, same-day, or next-day delivery, as well as unattended delivery or click & collect”, which is largely incorrect. When prompted for more detail (“Is 1-hour grocery delivery an important channel for grocers?”), Perplexity’s answer then correctly showed that it wasn’t.

We continued to pose follow up questions (and clicked on its own proposed follow up questions), trying to focus on more and more detailed, nuanced and challenging questions as we went, such as:

  1. What is Ocado management doing to reduce operating costs?
  2. Were there any responses Ocado’s management gave in their most recent earnings transcript that concerned you with respect to the state of the company?
  3. What are the potential outcomes for the dispute between Ocado and M&S?

These – and other – questions were answered by Perplexity at an expert level. Certainly, to a point where a PM would either be comfortable making an investment decision or close enough to one that he or she would require very little additional empirical evidence to do so.

Extending to Additional Processes

There are other processes within portfolio management that we thought were beyond the current models and did not plan to test, such as the ability to create financial models from a company’s financial statements. These kinds of processes are more bespoke to a particular PM or strategy and may need specific training to produce the required output. I also believed that the models’ typical English-prose output was not particularly conducive to financial modelling. But for completeness, I asked Perplexity to build a financial model for Ocado’s P&L. This is what it gave us:

The process it went through (which took about 15 seconds) was eye-opening[14]. I compared the P&L it produced to an equity research report on Ocado published by J.P. Morgan on March 26, 2025, and relative to that report Perplexity had used a growth rate for the Technology Solutions segment that was materially higher (perhaps due to the confidence of management in the results transcript?). So, I asked it “Can you redo the profit and loss statement but with only 5% growth in Technology Solutions rather than 10%?”, to which it replied “Absolutely! Here’s a revised Ocado Group Profit & Loss Statement for FY24–FY27, assuming 5% annual growth in Technology Solutions (instead of 10%) while keeping other assumptions unchanged” and proceeded to (correctly) adjust the line item’s growth rate, which then fed through to the bottom line.
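The mechanics of that adjustment are easy to reproduce. The sketch below uses placeholder segment figures (not Ocado's actual numbers) to show how a single growth-rate assumption feeds through to the top line:

```python
# Illustrative only: a toy three-segment revenue projection showing how
# changing one segment's growth assumption (10% -> 5% for Technology
# Solutions) feeds through to the total. Base figures are placeholders.
def project_revenue(base_segments, growth_rates, years=3):
    rows = []
    segments = dict(base_segments)
    for _ in range(years):
        segments = {name: rev * (1 + growth_rates[name])
                    for name, rev in segments.items()}
        rows.append(dict(segments, total=sum(segments.values())))
    return rows

base = {"Technology Solutions": 500.0, "Logistics": 700.0, "Retail": 2000.0}
high = project_revenue(base, {"Technology Solutions": 0.10, "Logistics": 0.03, "Retail": 0.10})
low = project_revenue(base, {"Technology Solutions": 0.05, "Logistics": 0.03, "Retail": 0.10})

delta = high[-1]["total"] - low[-1]["total"]
print(f"Year-3 top-line impact of the lower growth assumption: £{delta:.0f}m")
```

In a full model the revenue change would then cascade through gross margin, EBITDA and the bottom line, which is exactly the feed-through Perplexity performed.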

Steve Jobs said that a computer is a bicycle for the mind – you can go farther and faster, but it can’t get there by itself. Some people want AI systems to get there by themselves – a Waymo, for example – but systems like Perplexity are more like an e-bike or a motorbike (or maybe even a Ferrari). They are not just ‘infinite interns’ but infinite ‘top decile’ interns that need no oversight, no training and work 24/7 at 100x speed.

Impact on Efficiency, Costs and Investment Returns

In a study conducted in conjunction with Boston Consulting Group, researchers examined the performance implications of AI on realistic, complex, and knowledge-intensive tasks[15]. It concluded that the capabilities of AI create a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capabilities of AI. This appears to map well to the tasks performed by research analysts, although after seeing how Perplexity tackled the financial model above, we might need to revisit our assumption that there were still many tasks outside the purview of AI.

The study found that consultants using AI were significantly more productive: they completed 12% more tasks on average, completed tasks 25% more quickly, and produced significantly higher quality results (more than 40% higher quality compared to a control group). Impressive as these statistics are, the impact on investment management processes appears to be even higher, at least anecdotally: the potential for a single junior analyst to do the job of five or even ten juniors by pushing tasks through systems like Perplexity looks very real. When considering the potential impact, I looked at Glassdoor, which states that an average call centre job in London pays £21k per year, implying reasonable savings from AI-powered chatbots. The same site estimates that the average equity research analyst makes £92k per year (and pay can be materially higher at some institutions), implying that the potential cost savings, when applied to the professional class, are materially greater.
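The arithmetic behind that comparison is simple. Assuming, purely for illustration, that one AI-assisted analyst can cover the work of five, and using the £92k average salary quoted above:

```python
# Back-of-the-envelope using the £92k average analyst salary quoted above.
# The 5-to-1 substitution ratio is our illustrative assumption.
avg_analyst_salary = 92_000
roles_replaced = 5 - 1  # four roles no longer needed per retained analyst
annual_saving = roles_replaced * avg_analyst_salary
print(f"£{annual_saving:,} saved per retained analyst, per year")  # £368,000
```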

In a hyper-capitalist sector like asset management, a shrinking job market will trouble neither policymakers nor the managers of the firms involved. This trend appears unstoppable, and the fallout is likely to be large, even if it is clustered around the junior to intermediate ranks – the group trying to gain experience and learn the craft, often with the goal of becoming PMs themselves. But if a human is still needed to make the final decision, and their experience is key to quality-controlling the analysis they consume, where will the next generation of PMs come from if the junior ranks are massively reduced? That is a question beyond the scope of this blog…

Ethical Issues and Regulatory Compliance

We have constructed a business and organisational model where the PM is still a person, and they make all of the decisions. For as long as that is the case, the person will attract all of the regulatory burden (which is substantial within financial services), and none will fall on the models or systems themselves. Even firms that use algorithmic trading have designated regulated roles, even if those people are not fully privy to the underlying models that are doing the trading.

Beyond pure regulatory issues, the concept of ‘fiduciary duty’ is an important one in portfolio management. The PM must act solely in the best interests of the principals (i.e. their investors), while acting in good faith, maintaining confidentiality, avoiding conflicts of interest, etc. From an ethics perspective, the introduction of AI doesn’t change these underlying principles; it merely enhances (in my view) a PM’s ability to manage assets effectively. We are probably not very far away from firms touting the unbiased, transparent and comprehensive nature of AI systems as a way to reassure clients that they are acting in their best interest and performing as well as they can. This is certainly one of the more acceptable implementations of AI that I can think of.

Conclusion

Human intuition, creativity and idiosyncratic thought patterns will always play a key role in the competitive market of portfolio management. However, the related analytical tasks that feed into the decision-makers are ripe to be taken over by AI.

Benjamin Graham, known colloquially as the godfather of value investing, said that in the short run the market is a voting machine, but in the long run it is a weighing machine. The message underlying this quote is that there is an inevitability to the long run, unconcerned with the volatility, emotion and noise of day-to-day market issues. If a junior analyst is reading this blog, it is time to formulate a plan B. The long run is closer than you think.


[1] Portfolio mandates can be based on geography (US, Europe, Asia ex-Japan, Emerging Markets, etc.), company size (large-cap, mid-cap, etc.), sector (technology, financials, consumer, etc.), company stage (value, growth) or a combination of any of these factors (large and mid-cap consumer growth, small-cap European technology, etc.).    

[2] Fundamental analysis includes assessing a company’s financial health (including revenue growth, profitability, cash flow, and debt levels), the industry in which it operates, its market position (such as its competitive advantages and market share), the experience and capabilities of the management team and the company’s valuation relative to both its peers and the broader market.

[3] Technical analysis includes analysing price, volume and liquidity data in the context of trends, patterns and moving averages to gauge market sentiment and determine optimal entry and exit points.

[4]  Factors can include value considerations (stocks with low absolute or relative price-to-earnings ratios or other valuation metrics), price momentum, volatility characteristics, quality (typically defined as stable revenue and profit with low debt), and dividend yield.

[5] Other considerations include the stock’s risk profile, correlation with other stocks in the portfolio, ESG criteria and market sentiment.

[6] Alpha is a measure of performance. Positive alpha means that the portfolio manager did her job well; negative alpha means that she did not. Alpha allows different portfolios to be compared by incorporating the risk inherent in a particular portfolio by using the risk-adjusted difference between an investment’s return and a benchmark’s return. It is not a perfect measure, and factors such as determining the appropriate benchmark, risk-free rate and measurement period can all result in a relatively wide set of results for the same input dataset. Alpha is calculated by subtracting the risk-free rate (typically, the yield on US Treasury bonds) from the actual or expected rate of return, then weighing that difference by a factor that represents the systematic risk of a portfolio. This systematic risk depends on the portfolio that is being measured but can be a stock market index (such as the S&P 500) or calculated from a model such as the capital asset pricing model.
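The calculation described in this footnote can be written as a one-liner; the inputs below are illustrative, not drawn from any real portfolio.

```python
# Jensen's alpha as described above: subtract the risk-free rate, then
# weight the benchmark's excess return by the portfolio's systematic
# risk (beta). All inputs are illustrative.
def jensens_alpha(portfolio_return, risk_free, benchmark_return, beta):
    return (portfolio_return - risk_free) - beta * (benchmark_return - risk_free)

# e.g. 12% portfolio return, 4% risk-free rate, 9% benchmark return, beta 1.1
alpha = jensens_alpha(0.12, 0.04, 0.09, 1.1)
print(f"{alpha:.2%}")  # 2.50%
```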

[7] Most market participants believe the markets to be broadly efficient, albeit perhaps not as perfectly efficient as theory suggests. You could argue that equity research analysts (who work for investment banks) and PMs (who work for asset managers) would not exist if the market were truly perfectly efficient.

[8] More accurately, it is the correct decision if the position outperforms its relative benchmark on a risk-adjusted basis over the holding period, as we discussed when reviewing ‘alpha’. This means that a decision could still be a good one if the stock falls but falls less than its benchmark, or, alternatively, if it rises in line with the benchmark but with lower measured risk.

[9] We use the word ‘expensive’ in the context of being directly expensive (i.e. in terms of high-value labour), expensive in terms of time (i.e. the length of time required to reach a conclusion) and expensive in terms of opportunity cost (i.e. scarce resources allocated to sub-optimal analysis because it is only possible to analyse a subset of opportunities).

[10] For example, when asked the question about 1-hour grocery delivery, Perplexity drew on sources including The Grocer, RetailWire, IGD, Retail Week, McKinsey and Consumer Reports, among many others.

[11] https://lexfridman.com/aravind-srinivas-transcript#:~:text=It%20doesn%E2%80%99t%20always,Ngram%2Dbased%20retrieval

[12] Lex Fridman Podcast #434 (@ 2:08:56)

[13] https://www.unite.ai/alexandr-yarats-head-of-search-at-perplexity-interview-series/

[14] This is Perplexity’s ‘internal monologue’ before generating the spreadsheet seen in the text: “Okay, I need to build a financial model for Ocado showing a profit and loss statement based on their latest results. Let me start by looking at the provided search results. First, the half-year 2024 results show group revenue increased by 12.6% to £1,543.1 million. Adjusted EBITDA improved to £71.2 million. The full-year 2024 (FY24) results mention a 14.1% revenue growth to £3.2 billion, with adjusted EBITDA at £153.3 million. The statutory loss before tax was £374.3 million, which is an improvement from the previous year’s £387.0 million loss. Underlying cash outflow improved by £248.8 million to £(223.7)m. Looking at the segments, Technology Solutions revenue grew 18.1% to £241.4 million in H1 2024. Ocado Retail’s revenue was £2.68 billion in FY24, up 14%, driven by customer growth and order frequency. However, the company is still loss-making, with a pre-tax loss of £374.3 million for FY24. The market reacted negatively, with shares dropping 16.6%, possibly due to slower growth in Technology Solutions and high cash outflows. To model the P&L, I’ll need to project revenues for each segment (Technology Solutions, Ocado Logistics, Ocado Retail) based on historical growth rates and management guidance. For example, Technology Solutions is expected to grow around 10% in 2025, down from 18.1% in 2024. Ocado Retail is projected to grow above 10%. Costs should include COGS, operating expenses, depreciation, amortization, and finance costs. The EBITDA improvement suggests cost control measures are in place. However, the company still has significant depreciation and amortization costs (£210.3m in H1 2024) and net finance costs (£33.0m in H1 2024). I should also consider the underlying cash outflow improvements and management’s guidance for FY25, including a cash outflow of around £200 million and aiming for cash positivity by 2026. 
The model should reflect continued revenue growth, controlled operating expenses, but persistent high depreciation and finance costs leading to reduced losses over time. Potential risks include slower-than-expected growth in Technology Solutions, higher-than-expected costs, and continued reliance on external financing. The model should outline these assumptions clearly and provide a base case with sensitivity analysis. Writing answer…”

[15] Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (September 15, 2023). Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013; The Wharton School Research Paper. Available at SSRN: https://ssrn.com/abstract=4573321 or http://dx.doi.org/10.2139/ssrn.4573321

The intro picture shows the all-sky map of the Cosmic Microwave Background (CMB). The CMB is a snapshot of the oldest light in our Universe, imprinted on the sky just 380,000 years after the Big Bang. It shows tiny temperature fluctuations that correspond to regions of slightly different densities, representing the seeds of all future structure: the stars and galaxies of today.