Hello, I'm Azeem Azhar. As a global expert on exponential technologies, I advise governments, the world's largest companies, and investors on how to understand the exponential future. Every Sunday in this newsletter, I share my views on developments I think you should know about.
🎨 Thanks to our sponsor Masterworks, an investment platform that allows ordinary people to invest in multi-million-dollar paintings by artists such as Banksy and Basquiat.
In a study of biological reproducibility, 246 biologists who analyzed the same dataset produced very different results (h/t EV member Rafael Kaufman). The variation stemmed from diverse analytic decisions, shaped by choices such as sample size and by the researchers' different methodological backgrounds.
The results of reproducibility studies highlight the inconsistencies inherent in human analysis and in choices of scientific method. There has been a lot of talk recently about inconsistency and bias in AI and LLMs, but as we see, humans exhibit such fluctuations too. The central challenge is whether human intuition can be combined with AI precision to increase consensus, or at least to let readers explore a paper's data using methods and narratives beyond those its authors chose. Computational biologist Michael Eisen envisions a future where LLMs make it easier to present research results interactively, in a "paper on demand" format:
I think it’s only a matter of time before we stop using a single narrative as an interface between people and the results of scientific research.
Doing so would allow readers to try out and critique different methodologies and, hopefully, reach a broader consensus. Although this vision may seem futuristic, current tools already offer partial solutions. Elicit, for example, acts as an AI research assistant that quickly searches and summarizes research articles, reducing the time it takes to conduct meta-analyses and similar investigations.
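As a toy illustration of how the many-analysts problem arises (the numbers below are made up, not data from the study), consider two analysts who disagree only on whether to drop an apparent outlier:

```python
import statistics

# One shared dataset; the final value looks like an outlier (illustrative numbers).
data = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 9.8]

# Analyst A keeps every observation; Analyst B excludes the apparent outlier.
mean_all = statistics.mean(data)
mean_trimmed = statistics.mean([x for x in data if x < 5])

print(round(mean_all, 2))      # 3.24
print(round(mean_trimmed, 2))  # 2.15
```

One defensible preprocessing step shifts the headline estimate by a third; multiply that across dozens of such choices and 246 teams, and divergent conclusions are unsurprising.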
And this doesn't just apply to scientific consensus. In medicine, diagnostic agreement among physicians is an important issue, and perhaps one that can be improved. We are already seeing gains in the diagnostic accuracy of AI, and augmenting doctors with AI may help achieve higher diagnostic agreement. For example, a Mayo Clinic study of AI in virtual primary care found that healthcare providers chose one of the five diagnoses recommended by the AI in 84.2% of cases. AI clearly has the potential to bring consistency, accuracy and speed as both research and clinical settings grapple with human variability.
🎨 Today's edition is supported by: Masterworks
Historic 36% growth with low risk? It’s possible in this market
It's a market that most people have never considered investing in, yet its average prices have risen at an annual rate of 36% over the past 21 years. Moreover, the market's Sharpe ratio, which measures volatility-adjusted growth, was 1.5 over that period, compared with 0.49 for the S&P 500.
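For readers unfamiliar with the metric, the Sharpe ratio is simply mean excess return (return above the risk-free rate) divided by the volatility of those returns. A minimal sketch with hypothetical numbers, not real market data:

```python
import statistics

# Hypothetical annual returns for an asset (illustrative, not real market data).
returns = [0.55, 0.10, 0.62, 0.05, 0.48, 0.16]
risk_free = 0.02  # assumed annual risk-free rate

# Excess return over the risk-free rate for each year.
excess = [r - risk_free for r in returns]

# Sharpe ratio: mean excess return divided by its standard deviation.
sharpe = statistics.mean(excess) / statistics.stdev(excess)
print(round(sharpe, 2))  # 1.22
```

The higher the ratio, the more return an investor earned per unit of volatility endured, which is why a 1.5 versus 0.49 comparison is the ad's headline claim.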
So what is it? Paintings by the world-famous artist Yoshitomo Nara, a market made accessible to everyone by Masterworks. This unique investment platform allows the general public to invest in quality art at a fraction of the cost.
Exponential View readers can skip the waiting list via this dedicated link.
See disclosure information
Can cats become violent? Yann LeCun, Chief AI Scientist at Meta, recently told the FT that it is too early to worry much about the existential risks of AI. He warned of the pitfalls of hasty regulation, which could unwittingly strengthen the grip of existing tech giants, stifling innovation and sidelining new entrants. LeCun and Meta are passionate supporters of open-source AI development. Drawing parallels with the past, LeCun highlights the transformative successes of open-source platforms such as Linux and Apache in the internet ecosystem. Undoubtedly, new risks exist: a recent report by the RAND Corporation suggests that current LLMs could help plan and execute biological attacks. However, claims of existential risk may be somewhat overstated at this point. As LeCun aptly puts it, current AI is still "dumber than a cat."
See also:
Will we get lazy when machines have our backs? Recent research on human-machine collaboration reveals an interesting trend: working with robots can reduce our attention. When participants were tasked with identifying defects on a circuit board, those assisted by a robot found an average of 3.3 defects, compared with 4.2 when working alone. This suggests that mental engagement may drop when humans and machines collaborate. The observation is not new: just a few weeks ago we discussed research on how AI affected the performance of BCG consultants. While AI improved efficiency across the board, there was a noticeable drop (up to 19 percentage points) on tasks where the AI was substandard. It is a subtle reminder that, for all their benefits, machines can inadvertently encourage us to let our guard down.
Underwriting uncertainty. The U.S. insurance industry faces growing pressure from the rising cost of natural disasters: losses of $4.6 billion in 2000 have swelled to $100 billion today. The surge is driven primarily by extreme weather events worsened by climate change. As a result, reinsurance premiums and insurance prices have risen globally. In particularly vulnerable areas such as California and Florida, insurers are pulling back, leaving many areas without coverage. Since 2015, premiums have risen 21%, excluding many people from important coverage and shrinking risk-sharing pools. The grim outlook is underlined by the UK Institute and Faculty of Actuaries' forecast of a 50% GDP loss between 2070 and 2090 on current warming trends. One hopes these dire predictions will serve only to accelerate the pace of positive change. On a more hopeful note, EV member Sam Butler-Sloss recently co-authored an article for RMI highlighting common mistakes in energy transition analysis. Notably, many energy commentators tend towards the pessimistic side.
🔥 In my commentary this week, I'll be thinking about artificial general intelligence and what can be said about the question "When will AGI arrive?" It will be sent to paid members by Tuesday. I've been thinking seriously about this question for several months, and I'm keen to share my perspective. I'm sure you'll enjoy it.
Investment in climate tech fell more than 40% in the past 12 months.
In 2022, 58% of US households owned stocks, the highest share on record, exceeding the previous high of 53% set during the dot-com boom.
China's automobile transformation. In about two years, China has gone from a $30–40 billion deficit in finished vehicles to a $50 billion surplus.
A report examining U.S. data from 1983 to 2019 found that inflation hit households very unevenly: the real incomes of richer households rose substantially, while inflation cut the real incomes of the bottom two quintiles by as much as 50%.
Verified users pushed 74% of the most viral misinformation about the Israel-Hamas war on X.
📈 How the first companies to use Google Ads built their businesses.
💽 Project Silica — Store data on glass plates for 10,000 years.
🌲 The earliest evidence of wood being used for structural purposes dates back at least 476,000 years.
🧠 A minimally invasive brain implant that patients could have fitted at home?
💰 Meat and meat substitutes. Which is more expensive?
🔄 Recursive loops. What happens if you describe an image to GPT-4V and then ask Dall-E 3 to generate that description?
📸 Will AI photos ruin our memories of the past?
🕳️ What Serbian caves tell us about the weather 2,500 years ago.
🌐 What is Wikipedia's least popular page?
I was in Dubai this week doing some research on AI governance and the relationship between AI and employment. One question that kept coming up was what China makes of all this. At the Belt and Road Forum, the Chinese government announced an AI global governance initiative. The good news is that there is considerable agreement with much of how the US, UK and EU are thinking about these issues, which suggests that some kind of multilateral approach may not be impossible.
There is at least one important difference: a call to "oppose drawing ideological lines or forming exclusive groups to thwart other countries' AI development." Countries that sign up to this are explicitly challenging U.S. export controls.
But it’s at least a starting point, and the timing, just before the UK-hosted AI Safety Summit, is useful.
And at that event, one piece of good news: among those attending is Yann LeCun. As we wrote above, LeCun is skeptical about aspects of existential risk (see also this witty tweet). I have grown concerned that too many of these issues are being raised by proponents or sympathizers of the effective altruism belief system; LeCun has credibility as a counterweight.
Unlike climate change, where there is deep scientific consensus, there is much less agreement among scientists on AI risks. Therefore, input diversity is important.
Cheers,
Azeem
PS I posted a really fun survey about AI on LinkedIn. Please vote and share!
What’s new — Community updates
To share your news with EV readers, tell me what you're doing here.