This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

That was fast. In less than a week since Meta launched its AI model, LLaMA 2, startups and researchers have already used it to build new tools. It will be only a matter of time until companies start launching products built with the model.

In my story, I look at the threat LLaMA 2 could pose to OpenAI, Google, and others. Having a nimble, transparent, and customizable model that is free to use could help companies create AI products and services faster than they could with a big, sophisticated proprietary model like OpenAI’s GPT-4.

But what really stands out to me is the extent to which Meta is throwing its doors open. It will allow the wider AI community to download the model and tweak it. This could help make it safer and more efficient. And crucially, it could demonstrate the benefits of transparency over secrecy when it comes to the inner workings of AI models.

This could not be more timely, or more important. Tech companies are rushing to release their AI models into the wild, and we’re seeing generative AI embedded in more and more products. But the most powerful models out there, such as OpenAI’s GPT-4, are tightly guarded by their creators. Developers and researchers pay to get limited access to such models through a website and don’t know the details of their inner workings.

This opacity could lead to problems down the line, as is highlighted in a new study that caused some buzz last week. Researchers at Stanford University and UC Berkeley found that GPT-3.5 and GPT-4 performed worse at solving math problems, answering sensitive questions, generating code, and doing visual reasoning than they had a couple of months earlier. These models’ lack of transparency makes it hard to say exactly why that might be, but regardless, the results should be taken with a pinch of salt, Princeton computer science professor Arvind Narayanan argued in his assessment.
They are more likely caused by “quirks of the authors’ evaluation” than evidence that OpenAI made the models worse. He thinks the researchers failed to take into account that OpenAI has fine-tuned the models to perform better, which has unintentionally caused some prompting techniques to stop working as they did in the past.

This has some serious implications. Companies that have built and optimized their products to work with a certain iteration of OpenAI’s models could “100%” see them suddenly break, says Sasha Luccioni, an AI researcher at the startup Hugging Face. When OpenAI fine-tunes its models this way, products that have been built using very specific prompts, for example, might stop working the way they did before. Closed models lack accountability, she adds. “If you have a product and you change something in the product, you’re supposed to tell your customers.”

An open model like LLaMA 2 will at least make it clear how the company has designed the model and what training techniques it has used. Unlike OpenAI, Meta has shared the entire recipe for LLaMA 2, including details on how it was trained, which hardware was used, how the data was annotated, and which techniques were used to mitigate harm. People doing research and building products on top of the model know exactly what they are working with, says Luccioni. “Once you have access to the model, you can do all sorts of experiments to make sure that you get better performance or you get less bias, or whatever it is you’re looking for,” she says.

Ultimately, the debate around AI boils down to who calls the shots. With open models, users have more power and control. With closed models, you’re at the mercy of their creator. Having a big company like Meta release such an open, transparent AI model feels like a potential turning point in the generative AI gold rush.
If products built on much-hyped proprietary models suddenly break in embarrassing ways, and developers are kept in the dark as to why, an open and transparent AI model with similar performance will suddenly seem like a much more appealing—and reliable—choice.

Meta isn’t doing this for charity. It has a lot to gain from letting others probe its models for flaws. Ahmad Al-Dahle, a vice president at Meta who is leading its generative AI work, told me the company will take what it learns from the wider external community and use it to keep making its models better. Still, it’s a step in the right direction, says Luccioni. She hopes Meta’s move puts pressure on other tech companies with AI models to consider a more open path. “I’m very impressed with Meta for staying so open,” she says.

Deeper Learning

Face recognition in the US is about to meet one of its biggest tests

By the end of 2020, the movement to restrict police use of face recognition in the US was riding high. Around 18 cities had enacted laws forbidding the police from adopting it, and US lawmakers proposed a pause on the federal government’s use of the tech. In the years since, that effort has slowed to a halt. Five municipal bans on police and government use passed in 2021, but none in 2022 or in 2023 so far. Some local bans have even been partially repealed.

All eyes on Massachusetts: The state’s lawmakers are currently thrashing out a bipartisan bill that would allow only state police to access a very limited face recognition database, and would require them to have a warrant. The bill represents a vital test of the prevailing mood around police use of these controversial tools.

Meanwhile, in Europe: Police use of facial recognition technology is also a major sticking point for European lawmakers negotiating the EU’s AI Act. EU countries want their police forces to use the technology more, while members of the EU Parliament want a more sweeping ban on the tech.
The fight will likely be a long, drawn-out one, and it has become existential to the AI Act.

Bits and Bytes

The White House has made a pact with AI companies
The Biden administration announced an agreement with Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI that they would develop new technologies in a safe, secure, and transparent way. The companies pledged to watermark AI-generated content, invest in cybersecurity, and test products before releasing them to the market, among other things. But this is all completely voluntary, so the companies will face no repercussions if they don’t follow through. The voluntary nature of this announcement shows just how limited Biden’s powers are when it comes to AI.

ChatGPT’s surprising skill: facial recognition
OpenAI is testing a version of ChatGPT that can recognize and describe people’s faces from pictures. The tool could aid visually impaired people, but it could also be a privacy nightmare.

Apple has built its own generative AI model and chatbot
Better late than never, I guess. Apple executives have still not decided how they are going to release their model, Ajax, and chatbot, Apple GPT, to consumers.

Meet the Google engineers who pioneered an AI revolution
A nice look at the origin story of the transformer, the AI technology powering today’s generative AI boom, and the team of engineers who built it. Notably, none of them work at Google anymore.
July 25, 2023
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for the moon

It’s been more than 50 years since humans last walked on the moon. But starting this year, an array of missions from private companies and national space agencies plan to take us back, sending everything from small robotic probes to full-fledged human landers. The ultimate goal? Getting humans living and working on the moon, and then using it as a way station for possible later missions into deep space.

From private missions to hunt for water ice to much-needed updates to international lunar laws, here’s what’s next for the moon.

—Jonathan O’Callaghan

Jonathan’s piece is part of our What’s Next series, which takes a look across industries, trends, and technologies to give you a first look at the future. You can check out the rest of the series here.

How face recognition rules in the US got stuck in political gridlock

The US state of Massachusetts has become a hotbed of debate over police use of face recognition. Lawmakers there are considering a bill that would represent a breakthrough on the issue and could set a new tone of compromise for the rest of the country.

Tate Ryan-Mosley, our senior tech policy reporter, reported last week on how the governance of facial recognition is being held up in a unique type of political stasis. That’s because the battle between ‘abolish face recognition’ and ‘don’t regulate it at all’ has led to an absence of action. Compromises are the only way forward.

Tate’s story is from The Technocrat, her weekly newsletter digging into the divisions of power in Silicon Valley. Sign up to receive it in your inbox every Friday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Twitter’s rebranding as X has begun
The blue bird logo is among the first things on the chopping block.
+ But, knowing Elon Musk, don’t be surprised if the logo changes again soon.
+ Twitter’s name isn’t the problem—it’s everything else.

2 Sam Altman’s Worldcoin is rolling out across the world
But the path ahead looks far from smooth.
+ How Worldcoin recruited its first half a million test users.

3 Ukraine’s live combat data is in hot demand
For military businesses vying to shape the future of warfare, it’s invaluable.
+ Ukraine’s fighters are adapting to unfamiliar territory.
+ Mass-market military drones have changed the way wars are fought.

4 Sydney has virtually eliminated HIV transmission
The former AIDS hotspot proves that curbing the disease is possible.
+ Is an embryo model really an embryo? It depends who you ask.

5 An AI health startup is still extremely reliant on humans
DeepScribe says its AI is powerful, but teams of humans are the ones carrying out vital checks and catching errors.
+ Don’t bother asking chatbots for romantic advice.
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions.

6 Robots make exceptional rescuers
They’re particularly adept at working as teams in hazardous environments.
+ Why we shouldn’t worry about robots falling over.

7 What we learnt from the Great PowerPoint Panic of 2003
We were told the software was making us stupid, but 20 years on, other threats seem far more important.

8 Alcohol vending machines are taking over from bartenders in the UK
Good luck asking for anything more complicated than beer, though.

9 India’s rickshaw apps are on the rise
They solve an important problem—tracking one down in the first place.

10 Lab-grown chicken tastes just like… chicken
Alternative meat is much better than it used to be—but can it make an environmental difference?
+ Here’s what we know about lab-grown meat and climate change.
Quote of the day

“It’s a fitting end to a phenomenal unwinding of an iconic brand and business.”

—Allen Adamson, co-founder of the marketing consultancy Metaforce, is not a fan of Twitter’s plans to ditch its instantly recognizable blue bird logo and rebrand to X.

The big story

Capitalism is in crisis. To save it, we need to rethink economic growth.

October 2020

Even before the covid-19 pandemic and the resulting collapse of much of the world’s economy, it was clear that capitalism was in crisis. Unfettered free markets had pushed inequality of income and wealth to extremely high levels in the United States, and slow productivity growth in many rich countries had stunted a generation’s financial opportunities.

It’s no wonder many have begun questioning the devotion to free markets and faith in the power of economic growth to solve our problems. But while antipathy to growth is nothing new, its reemergence as a movement has taken on a harder political edge that questions whether we need to grow at all.

—David Rotman

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line.)

+ Phew, the double bill sounds intense, to say the least.
+ This was exactly what I needed to hear this morning.
+ Denim! Leather! Tiny shorts! George Michael’s was a truly beautiful sight to witness.
+ Yes please.
+ Take a few minutes out of your day to remind yourself just how great it really was.
July 24, 2023
This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

This week, I published a story about efforts to restrict face recognition in the US. The story’s genesis came during a team meeting a few months back, when one of my editors casually asked what on earth had happened to the once-promising campaign to ban the technology. Just several years ago, the US seemed on the cusp of restricting police use of the technology at a national level. I even wrote a story in May 2021 declaring as much. News flash: I was wrong. In the years since, the push to regulate the technology seems to have ground to a halt.

The editor held up his iPhone. “Meanwhile, I’m using it constantly throughout the day,” he said, referring to the face recognition verification system on Apple’s smartphone.

My story was an attempt to understand what happened by zooming in on one of the hotbeds of debate over police use of face recognition: Massachusetts. Lawmakers in the state are considering a bill that would be a breakthrough on the issue and could set a new tone of compromise for the rest of the country. The bill distinguishes between different types of technology, such as live video recognition and retroactive image matching, and sets some strict guardrails for law enforcement. Under the proposal, only the state police could use face recognition, for example.

During my reporting, I learned that face recognition regulation is being held up in a unique type of political stasis, as Andrew Guthrie Ferguson, a law professor at the American University Washington College of Law who specializes in policing and tech, put it. The push to regulate face recognition technology is bipartisan. However, when you get down to details, the picture gets muddier.
Face recognition as a tool for law enforcement has become more contentious in recent years, and Republicans tend to align with police groups, at least partly because of growing fears about crime. Those groups often say that new tools like face recognition help increase their capacity during staffing shortages. Little surprise, then, that police groups have no interest in regulation: police lobbies and the companies that provide law enforcement with tech are content to keep using the technology with few guardrails, especially as staffing shortages put pressure on law enforcement to do more with less. Having no restrictions suits them fine.

But civil liberties activists are generally opposed to regulation too. They think that compromising on measures short of a ban decreases the likelihood that a ban will ever be passed. They argue that police are likely to abuse the technology, so giving them access to it poses a threat to the public, and specifically to Black and brown communities that are already overpoliced and surveilled.

“The battle between ‘abolition’ and ‘don’t regulate it at all’ has led to an absence of regulation. That’s not the fault of the abolitionists,” says Ferguson. “But it has meant that the normal potential political compromise that you might’ve seen in Congress hasn’t happened, because the normal political actors are not willing to concede for any regulation.”

Some abolitionist groups, such as S.T.O.P. in New York, are turning their advocacy work away from police bans toward regulating private uses of face recognition. “We see growing momentum to pass bans on private-sector use of facial recognition,” says S.T.O.P.’s executive director, Albert Fox Cahn. However, he thinks we will eventually see a resurgence of calls to ban police use of the technology too.

In the meantime, it’s deeply unfortunate that as these tools become normalized in our lives, regulation is stuck in gridlock, especially when there is bipartisan agreement that we need it.
Compromises that set guardrails, but stop short of an absolute ban, might be the most promising path forward.

What I am reading this week

This morning, the White House announced a deal in which AI companies voluntarily agreed to a set of requirements, such as watermarking AI-generated content and submitting to external review. Notably left off the list were stipulations around transparency and data privacy. The voluntary agreements, while better than nothing, seem pretty fluffy.

I really enjoyed Charlie Warzel’s latest piece in the Atlantic. I am a sap for user-focused technologies. We often don’t think of the 10-digit identity as a breakthrough, but oh … how it is.

Regardless of the FTC’s recent losses, President Biden’s team isn’t backing down on antitrust. It’ll be interesting to watch how it plays out and whether the Justice Department can eventually do something to break up Big Tech.

What I learned this week

This week, I finally dove into our latest magazine issue, on accessibility. One story really stood out. Since January, US Customs and Border Protection has been using a new app to organize immigration flows and secure initial appointments for entry. One problem, though, is that the app—called CBP One—barely works, and it puts a massive strain on people trying to enter the country.

Lorena Rios writes about Keisy Plaza, a migrant traveling from Colombia: “When she was staying in a shelter in Ciudad Juárez in March, she tried the app practically every day, never losing hope that she and her family would eventually get their chance.” After seven weeks of constant worry, Plaza finally got an appointment. Rios’s story is heartbreaking—a bit dystopian, but useful, as she really gets at how technology can completely upend people’s lives. Take a read this weekend!
July 24, 2023
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of the series here.

We’re going back to the moon. And back. And back. And back again.

It’s been more than 50 years since humans last walked on the lunar surface, but starting this year, an array of missions from private companies and national space agencies plan to take us back, sending everything from small robotic probes to full-fledged human landers. The ultimate goal? Getting humans living and working on the moon, and then using it as a way station for possible later missions into deep space. Here’s what’s next for the moon.

Robotic missions are leading the charge

More than a dozen robotic vehicles are scheduled to land on the moon in the 2020s. On July 14, India launched its Chandrayaan-3 mission, the country’s second attempt to land on the surface of the moon after Chandrayaan-2 crashed there in 2019. That landing attempt will come in August.

Hot on its heels are two private companies in the US, Astrobotic and Intuitive Machines, both partly funded by NASA to begin moon landings this year. Astrobotic’s Peregrine lander is scheduled to carry a suite of instruments (some from NASA) to the moon’s northern hemisphere later this year to study the surface, including a sensor to hunt for water ice, along with a small rover to explore.

And Intuitive Machines’ Nova-C lander will attempt a lunar first. “Our primary objective is to land softly on the south pole region of the moon, which has never been done before,” said Steve Altemus, the company’s CEO, after NASA recently changed the mission’s original planned landing site. The mission will also carry a telescope to image the Milky Way’s center from the moon, another first, and some demonstration lunar data centers. The launch, on a SpaceX Falcon 9 rocket, is provisionally set for September. Both companies have bigger ambitions.
In 2024, Astrobotic hopes to send a NASA rover called VIPER to drive into some of the moon’s permanently shadowed craters and hunt for water ice. Intuitive Machines’ second mission, meanwhile, will deploy a small hopping vehicle that will jump into one of these pitch-black craters carrying a drill for NASA. “There’s quite a lot of excitement around that,” says Xavier Orr, the CEO of the Australian firm Advanced Navigation, which will provide the landing navigation system for Nova-C and the hopper. The craters, he adds, are thought to be “the most likely places of finding ice on the moon.”

These private companies are backed by millions of dollars in government money, driven by NASA’s desire to return humans to the moon as part of its Artemis program. NASA wants to expand commercial moon activity in the same way it has helped fund commercial activity in Earth orbit with companies such as SpaceX. “The goal is we return to the moon, open up a lunar economy, and continue exploring to Mars,” says Nujoud Merancy, chief of NASA’s Exploration Mission Planning Office at the Johnson Space Center in Texas. The ultimate plan, Merancy says, is to foster a “permanent settlement on the moon.”

Not all are convinced, especially when it comes to how companies will make money on lunar missions outside of funding from NASA. “What is the GDP of lunar activities?” says Sinead O’Sullivan, a former senior researcher at Harvard Business School’s Institute for Strategy and Competitiveness. “Some commercial economy may evolve, but it’s kind of hard to tell.”

Humans are going back, too

In November 2024, if all goes to plan, the Artemis II mission will send a crew of four astronauts—three American and one Canadian—around the moon on a 10-day mission in NASA’s Orion spacecraft, launched by the agency’s mighty new Space Launch System rocket. Humans have not traveled to the moon since Apollo 17 in 1972. The goal, however, is “not just returning, but staying and exploring,” says Merancy.
Artemis II “is really ensuring that the vehicles are ready for longer-duration missions in the future.”

Also in November 2024, a SpaceX Falcon Heavy rocket is scheduled to carry the first modules of NASA’s new space station near the moon, called the Lunar Gateway. Gateway is meant to support Artemis missions to the moon, although the exact relationship is still somewhat murky.

The first humans back on the moon are due to land in 2025, aboard a SpaceX Starship as part of Artemis III. Much work remains to be done, however, not least proving that Starship can launch from Earth (following a botched test flight in April 2023) and be refueled in space. This leaves some in doubt about the 2025 time frame. “A landing in 2029 would be really optimistic,” says Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics in Massachusetts.

NASA, meanwhile, has contracted both SpaceX and, more recently, Jeff Bezos’s competing Blue Origin for its planned landings at the moon’s south pole to prospect for water ice, which could be used as drinking water and maybe as rocket fuel, turning the moon into a staging point for missions to more distant destinations in the solar system, such as Mars. But the goal “isn’t just Mars,” says Teasel Muir-Harmony, a curator at the National Air and Space Museum in Washington, DC. “It’s learning how to live and work in deep space and creating a sustained presence further than Earth orbit.”

Moon laws need updating

International laws will need to be updated to cope with this uptick in lunar activity. At the moment, such activities are largely governed by the Outer Space Treaty, signed in 1967, but many of its particulars are vague. “We are getting into areas like private space platforms and lunar mining facilities, for which there really is no clear government precedent,” says Scott Pace, a space policy expert at George Washington University and former executive secretary of the National Space Council in the US.
“We have to be responsible for activities in space.”

Chris Johnson, space law advisor for the Secure World Foundation in the US, expects to see discussions at the United Nations over the next five or so years to iron out some of the issues. “We’re going to need norms for radio quiet zones, lunar roadways between valleys and craters, and landing pads on the moon,” he says. Or perhaps, if emergencies break out with astronauts from different countries on the moon, “everyone has to take shelter at the nearest shelter, whether it’s yours or another’s,” he says.

NASA has begun tentative steps toward this goal, getting countries to sign up to its Artemis Accords, a set of guidelines about lunar activities. But they are not legally binding. “We only have a set of principles,” says Johnson.

Lunar missions could come thick and fast while these discussions take place, potentially moving us into a new dawn of space travel. “With the International Space Station, we learned how to live and work in low Earth orbit,” says Muir-Harmony. “Now there’s this opportunity to learn how to do that on another celestial body, and then travel to Mars—and perhaps other locations.”
July 24, 2023
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Is the digital dollar dead?

In 2020, digital currencies were one of the hottest topics in town. China was well on its way to launching its own central bank digital currency, or CBDC, and many other countries launched CBDC research projects. Even Facebook had proposed a global digital currency, called Libra.

Few eyebrows were raised when the Boston branch of the US Federal Reserve announced a project to research how a CBDC might be technically designed. A hypothetical US central bank digital currency was hardly controversial, after all. And the US couldn’t afford to be left behind.

How things change. Three years later, the digital dollar—even though it doesn’t exist—has become political red meat, as some politicians label it a dystopian tool for surveillance. And late last year, the Boston Fed quietly stopped working on its project. So is the dream of the digital dollar dead?

—Mike Orcutt

Introducing MIT Technology Review Roundtables

I’m excited to announce that MIT Technology Review is launching a new participatory, subscriber-only online event series. Roundtables are 30-minute monthly conversations with our writers and editors aimed at keeping you informed about emerging tech like artificial intelligence, biotechnology, climate change, and more.

The first, on August 10, will feature David Rotman, MIT Technology Review’s editor at large, in conversation with editor in chief Mat Honan. In the second, on September 12, I (Charlotte, our news editor!) will be chatting with Melissa Heikkilä, our senior reporter for AI, about a topic she has bags of insight into.

Are you a subscriber to MIT Technology Review? If so, get these dates in your diary and join us then! You’ll have an email in your inbox soon with details on how to register. If you’re not a subscriber, what more reason do you need to sign up? Become one today and save up to 17%.
Digital subscriptions are temporarily just $69 a year.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Top AI companies have agreed to set up voluntary safeguards
As part of that, they’ve pledged to develop systems that will flag when a piece of content is AI-generated.
+ Part of the reason we need better methods? Existing AI-text detection tools are really easy to fool.
+ Biden’s NSA nominee has warned that AI is a growing security threat.

2 For Europe’s elderly, heat has become the new covid
Extreme temperatures not only disproportionately threaten older people’s health, they also isolate them.
+ Surviving the heat is a uniquely grim challenge for homeless people, too.
+ August is unlikely to bring much relief, at least in the US.

3 Google Search is losing its shine
It just doesn’t feel as useful as it used to. People are turning to Reddit, TikTok, and Wikipedia instead.
+ You might think AI chatbots are a viable alternative. You’d be wrong.

4 Stalkerware is seriously big business
Here’s how one major actor laundered its vast earnings.
+ Google is failing to enforce its own ban on ads for stalkerware.

5 TSMC has delayed opening its Arizona chip factory
It says it’s been hampered by a shortage of skilled workers.
+ The company says the entire semiconductor sector is experiencing a deepening slump.
+ The $100 billion bet that a postindustrial US city can reinvent itself as a high-tech hub.

6 TikTok is fully aware it has a labor problem
Just like Meta, it’s potentially open to being sued by traumatized outsourced moderators.

7 Apple is being hit with manufacturing woes
Not just for its headset either—it’s having issues with new iPhones too, which might force it to release fewer units.

8 What happened to America’s internet?
Attempts to expand access to high-speed broadband keep being sunk by politicking and partisanship.
9 Americans apparently don’t care about returning to the moon
“Been there, done that.”
+ The Hubble telescope spotted glittering shards of debris from NASA’s asteroid smash last September.

10 Want to keep the Barbie craze going? Play these video games
If you think Barbie spin-offs are new… boy, do I have news for you.

Quote of the day

“I have been feeling pretty unhappy and overwhelmed with my job. At the end of the day I can’t wait to go home and turn off my phone and have a drink and get away from it all.”

—An entry in the diary of Caroline Ellison, star witness in the FTX case as a former executive and girlfriend of Sam Bankman-Fried, written just a few months before it all imploded.

The big story

Inside the experimental world of animal infrastructure

June 2022

Around the world, cities are building a huge variety of structures intended to mitigate the impacts of urbanization and roadbuilding on wildlife. The list includes green roofs, tree-lined skyscrapers, living seawalls, artificial wetlands, and all manner of shelters and “hibernacula.”

But the data on how effective these approaches are remains patchy and unclear. That is true even for wildlife crossings, the best-studied and most heavily funded example of such animal infrastructure.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line.)

+ Life’s too short to drink bad coffee. Maximize your chances of making a good one.
+ Cats’ quirks might be down to the way they see the world.
+ Think your job’s tough?
+ It’s back in a big way—here’s how to make it extra tasty.
+ A viral Twitter thread has reached its natural conclusion.
July 21, 2023
It’s summer 2020. The world is under a series of lockdowns as the pandemic continues to run its course. And in academic and foreign policy circles, digital currencies are one of the hottest topics in town. China is well on its way to launching its own central bank digital currency, or CBDC, and many other countries have launched CBDC research projects. Even Facebook has proposed a global digital currency, called Libra.

So when the Boston branch of the US Federal Reserve announces Project Hamilton, a collaboration with MIT’s Digital Currency Initiative to research how a CBDC might be technically designed, it doesn’t raise many eyebrows. A hypothetical US central bank digital currency is hardly controversial, after all. And the US cannot afford to be left behind.

How things change. Three years later, the digital dollar—even though it doesn’t exist and the Fed says it has no plans to issue one—has become political red meat. Tapping into voters’ widespread opposition to government surveillance, a group of anti-CBDC politicians has emerged with the message that the digital dollar is something to fear.

It’s difficult to pinpoint when the dynamic changed, but a distinct brand of CBDC alarmism seemed to pick up after President Joe Biden signed an executive order in March 2022 stating that his administration would “[place] the highest urgency on research and development efforts into the potential design and deployment options of a United States CBDC.” Now legislators in both houses of Congress have introduced bills aimed at making sure a CBDC never sees the light of day. Presidential candidates are even campaigning against it.
“Anyone with their eyes open could see the danger this type of arrangement would mean for Americans who … would like to be able to conduct business without having the government know every single transaction they’re making in real time,” said Florida governor Ron DeSantis, who is running for the Republican nomination for president. In campaign speeches, DeSantis has described a dystopian future in which the government uses its CBDC network to block people from buying guns or fossil fuel. Not only does the Fed have no plans to issue a digital currency, but it has repeatedly said it wouldn’t do so without authorization from Congress. How one might work—including how closely it might —is still a wide-open question that can only be answered through research and testing. Project Hamilton’s goal was to build and test a prototype of just one component of a potential system: a way to securely and resiliently handle the same quantity of transactions that the major payment card networks process. Hamilton’s first phase demonstrated a feasible technical approach, and the researchers promised a “Phase 2” that would explore sophisticated approaches to privacy and offline payments. But late last year, shortly after the project came under scrutiny from anti-CBDC legislators, the Boston Fed ended Hamilton. Now the sort of technical design research that Project Hamilton exemplified may have to come from outside the central bank, which prefers to remain politically neutral. And a digital dollar looks less and less likely by the day.

The case for cash
Opponents of a hypothetical US CBDC cast it as a solution in search of a problem. Dollars are already digital, after all. If you paid with a debit card recently, did you not pay with digital dollars? China’s move to pilot a consumer central bank digital currency is not reason by itself to pursue one, they argue. Libra failed to launch; a global digital currency run by a tech company is no longer an issue.
What purpose would a government-issued digital currency serve other than to give the government a tool for financial surveillance and control? But there is a problem—probably one that you’ve noticed yourself. Physical cash is going away. Fewer and fewer vendors are accepting bills and coins. On top of that, consumers are simply choosing to use less cash. That’s in part out of convenience, but there’s another big reason: you can’t use cash to buy things on the internet. In the US, cash payments represented just 18% of all payments in 2022—down from 31% in 2016, according to research by the San Francisco Fed. Outside the US, things are even further along the road to a cashless society. The decline of cash is a primary reason central banks are researching the idea of creating their own digital currencies. The solution is a digital currency with all the features of physical cash, according to Willamette University law professor Rohan Grey. That we can’t use cash on Amazon is only one argument for government-issued digital cash, says Grey. In the US, plenty of people rely on bills and coins because they don’t have bank accounts and can’t get credit or debit cards. The Federal Deposit Insurance Corporation estimates that in 2021, 5.9 million US households were “unbanked.” Besides that, Grey argues, cash has unique “social features” that we should be careful to preserve, including its privacy and anonymity. No one can trace how you spend your coins and bills. “I think anonymity is a social good,” he says. Last year, Grey helped author a US House bill called the Electronic Currency and Secure Hardware Act (ECASH).
The legislation, which was introduced by Representative Stephen Lynch of Massachusetts, would have directed the Department of the Treasury to create a digital dollar that could be used both online and offline and have cash-like features, “including anonymity, privacy, and minimal generation of data from transactions.” It didn’t make it out of the Financial Services Committee, but Grey says there are plans to reintroduce it this year. DeSantis and other CBDC opponents most likely agree with Grey that we should replicate the privacy of cash in digital form—after all, they claim to be defending Americans against a financial surveillance state. But whereas Grey is advocating for a government-controlled system, they seem to prefer something more like decentralized cryptocurrency networks, which are not controlled by any central authority. DeSantis recently signed a bill explicitly banning a “centralized” digital dollar in Florida, apparently leaving the door open for one that is decentralized. Representative Tom Emmer of Minnesota, who introduced a bill this year that would prohibit the Fed from issuing a digital currency, has said multiple times that a CBDC must be “open, permissionless, and private.” “Permissionless” is a term enthusiasts use for crypto networks like Bitcoin and Ethereum, which are open to anyone with an internet connection. Emmer, a Republican, is one of Congress’s most outspoken crypto enthusiasts.

A spectrum of possible designs
It is not clear how currency issued by a central bank could ever be controlled by a permissionless crypto network. And Bitcoin and similar cryptocurrencies have privacy issues of their own. Though users are pseudonymous, information about the sender, the recipient, and the amount of every transaction is published on the blockchain. Investigators are skilled at using clues, like personal information that users share with crypto exchanges, to discover users’ real identities.
Either way, using a blockchain network won’t suffice, says Grey, because many of the same people who rely on cash also lack internet access. He envisions cards that could be tapped together or against smartphones to transfer value anonymously, online or offline. Like physical dollars, the digital stand-ins would be so-called bearer instruments, meaning that possession gives the holder rights to ownership. There are a number of unanswered technical questions about how to pull all this off securely, however—a fact that Grey acknowledges. Unanswered technical questions were also the motivation behind Project Hamilton. The researchers set out to investigate possible designs for a “resilient transaction processor” that could handle at minimum tens of thousands of transactions per second, the capacity they determined necessary to handle the volume of retail transactions in the US. But they also sought to develop a transaction processor that was flexible enough in its design to leave open a range of options for other parts of the system, like technologies for privacy and offline payments. The software they came up with does not use a blockchain, but it borrows components from Bitcoin. Neha Narula, director of the Digital Currency Initiative at the MIT Media Lab, says it’s possible to break a blockchain system down into its component parts and then apply some but not all of those pieces in a different context. For example, one piece is a blockchain’s decentralized nature, which makes it possible to run a cryptocurrency system without relying on any one person to control it. The team decided that a CBDC would not need this property, since it would be run by a central bank. Another property of blockchains is known as Byzantine fault tolerance (BFT), which allows the network to keep functioning even if some participants act maliciously or dishonestly.
The Hamilton team decided they could assume that since the system would be run by a single central bank, there wouldn’t be malicious participants, and so BFT wouldn’t be required. Ditching BFT and decentralized governance has its benefits. In Bitcoin, maintaining them both makes the system expensive and slow to run, in part because data must be replicated on every computer on the network. The result is that Bitcoin can only process around seven transactions per second. In early 2022, the Hamilton team announced a system capable of processing 1.7 million transactions per second—much faster than even the Visa network, which Visa says can process 65,000 transactions per second. Like Bitcoin, Hamilton’s transaction processor used cryptographic signatures to authorize payments. It also used Bitcoin’s method for recording transactions, called the unspent transaction outputs (UTXO) model, which stops people from spending the same coin twice. The details of the UTXO model are complicated, but it works because each transaction references the specific coins being spent. Narula stresses that Project Hamilton was a “first step” toward understanding how a CBDC might be designed. The team made the software open source so that other teams could build on it. But it was not advocating for specific design decisions. There is a spectrum of possible CBDC designs, ranging from traditional bank accounts that the Fed offers directly to consumers (currently it only offers accounts to banks) to something that looks like a “digital bearer instrument,” Narula says. Besides demonstrating the ability to handle lots of transactions, Hamilton also showed that “if designers want to, it’s possible to build a system that stores very little data about transactions, users, and even outstanding balances,” says Narula. “A big misconception about CBDCs right now is this assumption that they have to be built in a way where whoever is running it can see everything.”

So… what’s next?
Nonetheless, not even a fundamental research project like Hamilton was able to escape the ire of anti-CBDC politicians. In December of last year, Emmer and eight other members of Congress sent a letter to the president of the Boston Fed, arguing that there had been “insufficient visibility into the interaction between Project Hamilton and the private sector.” The legislators cited an FAQ from the Boston Fed stating that the Fed had been working with “government, academia, and the private sector” to learn about “potential use cases, a range of design options, and other considerations” related to CBDCs. The letter went on to ask several questions, including whether the Boston Fed intended to fund startups interested in designing CBDCs and whether any firms involved in the project might be able to “exploit a regulatory advantage over competitors.” Emmer’s office did not respond to MIT Technology Review’s questions regarding whether it ever received answers to the questions in the letter. But the Federal Reserve does not invest in startups. And it’s not surprising that Project Hamilton would openly take input from the private sector, because many of the most innovative ideas for digital currency technology lie in the commercial arena. The letter’s final question asked how Project Hamilton was addressing concerns about “financial privacy and financial freedom” in a CBDC system. In fact, the “Phase 2” promised in the Hamilton whitepaper, which was published in February of 2022, was explicitly meant to entail research into the use of advanced cryptography to “greatly increase user privacy from the central bank.” But when the project shut down in December, the announcement made no mention of Phase 2. The Fed, which aims to stay out of politics whenever possible, hasn’t stopped doing research on CBDCs, says Darrell Duffie, a professor of finance at Stanford’s Graduate School of Business. But it has slowed considerably, and “nobody is charging ahead openly” the way Hamilton did, he says.
Duffie speculates that “maybe Project Hamilton would have had another phase” if it had not been for Emmer’s letter. A spokesperson for the Boston Fed declined to answer questions about Phase 2. Project Hamilton “was completed at the end of 2022,” the spokesperson said in an emailed statement, adding that the Boston Fed “continues to contribute to ongoing Federal Reserve System research that aims to deepen the Federal Reserve’s understanding of the technology that could support the issuance of a CBDC.” The spokesperson also reiterated that the Fed “has made no decision on issuing a CBDC and would only proceed with the issuance of a CBDC with an authorizing law.” According to MIT’s Narula, the collaboration with the Boston Fed “reached a natural end.” But the Digital Currency Initiative has continued working on the research project formerly known as Hamilton and still hopes to publish some of that work. “The only way to really truly understand these types of systems is to build and test them,” she says.
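The UTXO bookkeeping described above can be sketched in a few lines. This is a toy illustration, not Project Hamilton’s actual code: the `Ledger`, `mint`, and `spend` names are invented for this sketch, and a real system adds cryptographic signatures, networking, and durable storage. It shows only the core rule that each transaction names the exact unspent coins it consumes, which is what makes double-spends easy to reject.

```python
# Toy sketch of the UTXO model: coins are unspent transaction outputs,
# identified by (txid, output index). Spending destroys the named coins
# and creates new ones; reusing a spent coin is rejected.

class Ledger:
    def __init__(self):
        self.utxos = {}          # (txid, index) -> value
        self.next_txid = 0

    def mint(self, value):
        """Create a new coin (stand-in for central-bank issuance)."""
        txid = self.next_txid
        self.next_txid += 1
        self.utxos[(txid, 0)] = value
        return (txid, 0)

    def spend(self, inputs, output_values):
        """Atomically consume the named coins and create new ones."""
        if any(coin not in self.utxos for coin in inputs):
            return None          # double-spend or unknown coin: reject
        if sum(self.utxos[coin] for coin in inputs) != sum(output_values):
            return None          # value must be conserved
        for coin in inputs:
            del self.utxos[coin]  # a coin disappears once spent
        txid = self.next_txid
        self.next_txid += 1
        new_coins = []
        for idx, value in enumerate(output_values):
            self.utxos[(txid, idx)] = value
            new_coins.append((txid, idx))
        return new_coins

ledger = Ledger()
coin = ledger.mint(10)
change = ledger.spend([coin], [7, 3])   # succeeds: pays 7, keeps 3
again = ledger.spend([coin], [10])      # same coin again: rejected
print(change is not None, again is None)  # True True
```

Because every transaction pins down the coins it spends, a processor only needs to check set membership to block double-spending, which is part of why this structure can be made so fast once blockchain consensus is stripped away.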
July 21, 2023
On August 10, MIT Technology Review is launching Roundtables, a participatory subscriber-only online event series, to keep you informed about emerging tech. Subscribers will get exclusive access to 30-minute monthly conversations with our writers and editors about topics they’re thinking deeply about—including artificial intelligence, biotechnology, climate change, tech policy, and more. (If you’re not yet a subscriber, .) The first Roundtables event, The AI economy, will feature David Rotman, MIT Technology Review editor at large, in conversation with editor in chief Mat Honan. They will discuss David’s recent coverage and, more broadly, efforts to create innovation hubs. There is little doubt that generative AI will affect the economy—but how, exactly, remains an open question. Despite fears that these AI tools will upend jobs and exacerbate wealth inequality, early evidence suggests the technology could help level the playing field—but only if we deploy it in the right ways. Likewise, the Inflation Reduction Act and the Chips Act both have huge implications for the economy, and for efforts to revive America’s high-tech manufacturing base. Rotman and Honan will look at who stands to benefit from these transformative economic events, and what the risks are. Then, on September 12, our next edition of Roundtables will tackle another important question: How should we regulate AI? Charlotte Jee, news editor, and Melissa Heikkilä, senior reporter for AI, will discuss the state of AI regulation today and what to watch for in the months ahead. The EU’s AI Act focuses on creating guardrails for “high-risk” AI used in health care and education systems. In the US, a patchwork of federal regulations and state laws governs certain aspects of automated systems, while work on a federal framework remains in the early stages. Meanwhile, the OECD has set forth a set of nonbinding principles for AI development, and other frameworks are also taking shape.
Heikkilä and Jee will walk subscribers through these and other approaches, mapping out the landscape of proposed policies that aim to redirect AI toward serving societal goals or address potential biases that put people at risk. If you’re a subscriber, check your email for details on how to register for both events. (.) We hope you’ll join us as we explore what’s happening now and what’s coming next in emerging technologies.
July 20, 2023
This is today’s edition of , our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Face recognition in the US is about to meet one of its biggest tests
Just four years ago, the movement to ban police departments from using face recognition in the US was riding high. By the end of 2020, around 18 cities had enacted restrictive laws, and lawmakers proposed a pause on the federal government’s use of it. In the years since, that effort has slowed to a halt. Some local bans have even been partially repealed, and today, few seriously believe that a federal ban could pass in the foreseeable future. Right now in the US, facial recognition regulations are trapped in political gridlock. However, in Massachusetts there is hope for those who want to restrict police access to face recognition, thanks to a bipartisan state bill that lawmakers are currently thrashing out, which would do exactly that.
A lot rides on whether this law gets passed. It could usher in a new age of compromise, and could set the standard for how face recognition is regulated elsewhere. On the other hand, if a vote is delayed or fails, it would be yet another sign that the movement is waning. .
—Tate Ryan-Mosley

Want to know where batteries are going? Look at their ingredients.
Batteries are going to be a key part of how we tackle climate change. They’ll transform transportation and could also be crucial for storing renewables like wind or solar power for times when those resources aren’t available. So in a way, they’re a central technology for the two sectors responsible for the biggest share of emissions: energy and transportation. The International Energy Agency has just released a new report on the state of critical minerals in energy, which has some interesting battery-related tidbits. If you want to understand what’s next for batteries, you need to look at what’s happening right now in their materials. .
—Casey Crownhart

Casey’s story is from The Spark, her weekly climate newsletter. to receive it in your inbox every Wednesday.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple is plotting its own answer to ChatGPT
It’s part of Tim Cook’s plans to embrace AI on a “very thoughtful basis.” ( $)
+ AI might not be all bad news for Hollywood. ( $)
+ Google is experimenting with AI for journalists. ( $)
+ Researchers think they’ve proved that GPT-4 is getting worse. ()
+ ChatGPT is everywhere. Here’s where it came from. ()

2 It’s about to get tougher for tech companies to merge
Courtesy of new US guidelines designed to prevent monopolies. ( $)

3 We all know climate change is getting worse
But experts are divided over whether it’s doing so faster than expected, and if so why. ( $)
+ Do these heat waves mean climate change is happening faster than expected? ()

4 A bill that curbs US government access to our data has been greenlit
If passed, it’ll prevent agencies from buying data without a warrant. ()

5 EVs have a tire pollution problem
Making them smaller and slower are among the ways to reduce it. ( $)
+ Recycling cars is big business these days. ( $)

6 It looks like Netflix’s password-sharing crackdown paid off
Almost 6 million subscribers have stumped up for their own accounts. ( $)

7 The world is racing to unlock the power of geothermal energy
Fracking is a contentious way to do it, though. ( $)
+ This geothermal startup showed its wells can be used like a giant underground battery. ()

8 What is a head of AI?
Plenty of companies don’t know, but they’re hiring them anyway. ()

9 Gen Z is freezing their eggs
But even with youth on their side, success rates are still on the low side. ()
+ There’s still so much we don’t understand about fertility. ( $)
+ I took an international trip with my frozen eggs to learn about the fertility industry.
()

10 This free music streaming app is gaining fans in Latin America
It’s ad-powered, and crucially, it’s legal. ()

Quote of the day
“It’s sort of like you’re a therapist. They tell you their life stories.”
—Ylonda Sherrod, an AT&T call center worker, tells the that replacing employees with AI systems will leave customers missing the human touch.

The big story
Why the balance of power in tech is shifting toward workers
February 2022
Something has changed for tech giants. Even as they continue to hold tremendous influence in our daily lives, a growing accountability movement has begun to check their power. Led in large part by tech workers themselves, a movement seeking reform of how these companies do business has taken on unprecedented momentum, particularly in the past year.
Concerns and anger over tech companies’ impact in the world are nothing new, of course. What’s changed is that workers are increasingly getting organized. .
—Jane Lytvynenko

We can still have nice things
A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or .)
+ The is kicking off today! And England are the favorites to challenge the reigning champions.
+ This extreme sounds like the perfect way to keep your beers icy cold.
+ Wow, the of the Middle Ages sure knew how to have a good time.
+ Hey, hey, hey, what would an entire song of sound like?
+ The unbearable brilliance of .
July 20, 2023
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, . I was chatting with a group recently about which technology is the most crucial one to address climate change. With the caveat that we’ll definitely need a whole host of solutions to truly tackle the challenge, my personal choice would have to be batteries. This might not be a surprise, since I’m almost constantly going on about batteries. If you want to read more on the topic, we’ve got loads to choose from on the site. You can start , or . Batteries are going to transform transportation and could also be key in storing renewables like wind or solar power for times when those resources aren’t available. So in a way, they’re a central technology for the two sectors responsible for the biggest share of emissions: energy and transportation. And if you want to understand what’s coming in batteries, you need to look at what’s happening right now in battery materials. The International Energy Agency just released a new report on the state of critical minerals in energy, which has some interesting battery-related tidbits. So for the newsletter this week, let’s dive into some data about battery materials.

So what’s new with battery materials?
This probably isn’t news to you, but EV sales are growing quickly—they made up and will reach 18% in 2023, according to the IEA. This global growth is one of the reasons we here at MIT Technology Review put “” on our list of breakthrough technologies this year. Add to the steady market growth the fact that around the world, EV batteries are getting bigger. That’s right—not just in the US, which is infamous for its massive vehicles. The US still takes the cake for the largest average battery capacity, but the inflation of battery size is a worldwide phenomenon, with both Asia and Europe seeing a similar or even more dramatic jump in recent years.
Add up the growing demand for EVs and the rising battery capacity around the world, toss in the role that batteries could play for storage on the grid, and it becomes clear that we’re about to see a huge increase in demand for the materials we need to make batteries. Take lithium, one of the key materials used in lithium-ion batteries today. If we’re going to build enough EVs to reach net-zero emissions, lithium demand is going to increase roughly tenfold between now and 2040. Lithium is one of the most dramatic examples, but other metals, like copper and nickel, are also going to be in high demand in the coming decades (you can play around with the IEA’s data explorer for yourself ). We’re not going to run out of any of the materials we need to generate renewable energy, . Batteries could be a tighter scenario, but overall, experts say that we do have enough resources on the planet to make the batteries we need. And , we should eventually get to a place where there’s a stable supply of materials from old batteries. But we’ve already started to see what dramatic increases in material demand can mean in the short term for the battery market. Recently, prices for lithium and some other metals have seen huge spikes as battery manufacturers scrambled to meet the immediate demand. That caused prices for lithium-ion batteries to rise for the first time ever.

What does all this mean?
So we’re seeing huge demand increases that are only going to continue, and while there are enough materials in the long term, there could be some short-term scrambles for purified and processed battery materials. That’s going to shape the battery world going forward, and there are a couple of ways that could play out: First, automakers are going to get even more involved with the raw materials they need to make batteries. Their business depends on having these materials consistently available, and they’re already making moves to secure their own supply.
As of 2023, all but one of the world’s top 10 EV makers have signed some sort of long-term offtake deal to secure raw materials. Five have invested in mining, five have invested in refining, and almost all those deals have happened since 2021. Supply constraints will also push new innovation in batteries. We’ve already seen the start of this: cobalt has been a crucial ingredient in cathodes for lithium-ion batteries for years. But the metal has come under scrutiny because its mining has been linked extensively to forced and child labor. In recent years, tech giants and EV makers have pledged to use only responsibly mined cobalt. And at the same time, battery makers started turning to chemistries that contain less cobalt, or even cut out the metal entirely, partly in an effort to cut costs. Lithium iron phosphate batteries don’t contain any cobalt, and they’ve grown from a small fraction of EV batteries to about 30% of the market in just a few years. Low-cobalt options have also gained traction just since 2019. I think we’re going to keep seeing new, exciting options in the battery world, in part because of these materials constraints. could play a major role in grid-scale storage, for example, and we could also see more in cheap EVs soon. I don’t pick favorites when it comes to climate technologies, but I’m always watching the battery world especially closely. So stay tuned for more on the crucial role of materials for the future of batteries—and in the meantime, check out some of our recent stories on the topic.

Related reading
I wrote in January about this year. I think my predictions are playing out pretty well so far.
Lithium iron phosphate batteries could help slash EV prices, .
I see a lot of myths around climate technology and materials—and I busted a few .

Keeping up with climate
There are record-breaking heat waves across the US, China, and Europe. ()
→ I wrote about the limits of the human body in extreme heat in 2021.
() Speaking of heat, a group of scientists created an especially white paint that can reflect about 98% of the sun’s rays. It could help keep buildings cooler. () Among the most important components in many fusion reactors are the magnets. I loved this in-depth look at the role of superconducting tape inside the tokamak reactor that Commonwealth Fusion Systems is building. () Diablo Canyon is California’s last nuclear plant and the state’s single largest energy source. It’s scheduled to come offline in 2025—but whether or not that will happen as planned is still to be determined. () Some oil companies are getting into the carbon removal game. Their involvement with the technology could make things complicated for its role in cutting emissions. () The Biden administration is putting a lot of money into “climate-smart” crops, which could help pull more carbon out of the atmosphere and store it. But critics are concerned that we don’t understand or measure enough to know how well these plans would work. () These companies want to replace polluting diesel generators with batteries. () Low-quality batteries found in some e-bikes can be dangerous, and they’ve sparked several fires in New York City in recent months. The food delivery workers who rely on these bikes could use support from the apps that broker their work, like Uber and DoorDash. ()
July 20, 2023
This is today’s edition of , our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meta’s latest AI model is free for all
The news: Meta is going all in on open-source AI. The company has unveiled LLaMA 2, its first large language model that’s available for anyone to use—for free. It’s also releasing a version of the AI model that people can build into ChatGPT-style chatbots.
Why it matters: The idea is that by releasing the model into the wild and letting developers and companies tinker with it, Meta will learn important lessons about how to make its models safer, less biased, and more efficient.
But… Many caveats remain. Meta is not releasing information about the data set that it used to train LLaMA 2, and the model still spews offensive, harmful, and otherwise problematic language, just like rival models. Meta also cannot guarantee that it didn’t include copyrighted works or personal data, according to a company research paper shared exclusively with MIT Technology Review. .
—Melissa Heikkilä

Spotting Chinese state media social accounts continues to be a challenge
It’s no secret that Chinese state-owned media are active on Western social platforms. But sometimes they take a covert approach and distance themselves from China, perhaps to reach more unsuspecting audiences. Such operations have been found to target Chinese- and English-speaking users in the past. Now, a study published last week has discovered another network of Twitter accounts that seems to be obscuring its China ties—and this time, it’s made up of Spanish-language news accounts targeting Latin America. .
—Zeyi Yang

This story is from China Report, Zeyi’s weekly newsletter covering tech in China. to receive it in your inbox every Tuesday.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 ChatGPT could become an advanced facial recognition machine
But OpenAI wants to avoid that outcome at all costs. ( $)
+ China is concerned AI could become a “runaway horse.” ()
+ How to stop AI from recognizing your face in selfies. ()

2 North Carolina is policing online abortion discussions
In theory, users could be prevented from posting and reading about abortion access. ( $)

3 Calls for AI companies to recompense authors are growing louder
Margaret Atwood is among the high-profile names to join the charge. ( $)
+ Microsoft has started charging $30 a month for its generative AI. ( $)
+ OpenAI’s hunger for data is coming back to bite it. ()

4 The US has blacklisted more overseas spyware companies
In a bid to deter investors from sinking cash into seemingly dodgy companies. ( $)
+ US lawmakers are cracking down on surveillance overreach. ( $)
+ Snapchat has been accused of promoting Saudi Arabia’s royals. ()

5 Temperatures are soaring in Death Valley
And it’s looking increasingly likely it’ll break its all-time record. ( $)
+ Tourists are getting in on the act, too. ()

6 The Pentagon doesn’t want you to track its planes
It’s had enough of civilians keeping an eye on mysterious aircraft. ()

7 Lemon8 is flopping
ByteDance’s other social media platform’s biggest problem? It’s bland. ()
+ New apps need to break out of the social doom cycle. ( $)

8 Who uses WeightWatchers in the age of Ozempic?
Loyal WeightWatchers users feel betrayed by the company’s decision to embrace weight-loss medications. ( $)
+ Weight-loss injections have taken over the internet. But what does this mean for people IRL? ()

9 How to tell if your baby monitor is vulnerable to hacking
A new label will help consumers in the US to decide. ()
+ Electric vehicles are at high risk, too. ( $)

10 Your phone number isn’t dead yet
For many people, it’s a reminder of where they’ve come from.
( $)

Quote of the day
“I’m worried about regular intelligence!”
—Actor Danny Trejo tells he’s got bigger concerns than the rise of AI during an interview about the Hollywood strikes.

The big story
Minneapolis police used fake social media profiles to surveil Black people
April 2022
The Minneapolis Police Department violated civil rights law through a pattern of racist policing practices, according to a damning report by the Minnesota Department of Human Rights. The report found that officers stop, search, arrest, and use force against people of color at a much higher rate than white people, and covertly surveilled Black people not suspected of any crimes via social media.
The findings are consistent with MIT Technology Review’s investigation of Minnesota law enforcement agencies, which has revealed an extensive surveillance network that targeted activists in the aftermath of the murder of George Floyd. .
—Tate Ryan-Mosley and Sam Richards

We can still have nice things
A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or .)
+ Interesting— has a new podcast.
+ This ‘’ rising from the deep is pretty unnerving.
+ I wish I was small enough to benefit from this .
+ H.G. Wells was a man of many talents, including incredibly prescient .
+ This is fun: what birdwatching in can teach us.
July 19, 2023
This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

It’s no secret that Chinese state-owned media are active on Western social platforms, but sometimes they take a covert approach and distance themselves from China, perhaps to reach more unsuspecting audiences. Such operations have been found to target Chinese- and English-speaking users in the past. Now, researchers have discovered another network of Twitter accounts that seems to be obscuring its China ties. This time, it’s made up of Spanish-language news accounts targeting Latin America.

Sandra Quincoses, an intelligence advisor at the cybersecurity research firm Nisos, found three accounts posting news about Paraguay, Chile, and Costa Rica on Twitter. The accounts seem to be associated with three Chinese-language newspapers based in those countries. All three are subsidiaries of a Brazil-based Chinese community newspaper called South America Overseas Chinese Press Network.

Very few of the posts are overtly political. The content, which is often the same across all three accounts, usually consists of Spanish-language news about Chinese culture, Chinese viral videos, and one panda post every few days. The problematic part, Quincoses says, is that they obscure the sources of their news posts. The accounts often post articles from China News Service (CNS), one of the most prominent Chinese state-owned publications, but they do so without attribution. Sometimes the accounts will go halfway toward attribution. They might specify, for example, that the news is from “Twitter •mundo_china” without actually tagging @mundo_China, an account affiliated with the Chinese state broadcaster. “When you do not mention Twitter accounts with the proper ‘@’ format, tools that collect from Twitter to do analysis don’t pick up on that,” says Quincoses.
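Quincoses’s point is worth making concrete: collection tools typically key on the literal “@handle” syntax, so an obfuscated handle never registers as a mention. The toy extractor below is a simplified sketch of that behavior, not any particular tool’s implementation:

```python
import re

# Simplified mention extractor in the style of typical social media
# analysis tools: it only recognizes the literal "@handle" syntax.
MENTION_RE = re.compile(r"@([A-Za-z0-9_]{1,15})")

def extract_mentions(text: str) -> list[str]:
    """Return the handles a naive collector would pick up."""
    return MENTION_RE.findall(text)

# A properly tagged handle is collected...
print(extract_mentions("Via @mundo_China: panda news"))          # ['mundo_China']
# ...but the obfuscated form slips through unrecognized.
print(extract_mentions("Via Twitter •mundo_china: panda news"))  # []
```

Because the second post never matches, it is invisible to any downstream analysis built on extracted mentions, which is exactly the effect Quincoses describes.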
As a result, these accounts can fly under the radar of social network analysis tools, making it hard for researchers to associate them with accounts that are clearly related to the Chinese government. It’s unclear whether these accounts and the newspapers they belong to are controlled directly by Chinese state media. But as obscure as they are, real Chinese diplomats are following them, suggesting official approval. And one government outlet—CNS—is working closely with these newspapers.

CNS is directly owned by the Chinese Communist Party’s United Front Work Department. It started fostering ties with outlets aimed at Chinese immigrant communities around the world. Today, CNS and these immigrant community newspapers often co-publish articles, and CNS invites executives from the publications to visit China for a conference called the Forum on the Global Chinese Language Media. Some of these publications have often been accused of , the main example being .

As media outlets enter the digital age, there is more evidence that these overseas diaspora publications have close ties with CNS. Sinoing (also known as Beijing Zhongxin Chinese Technology Development or Beijing Zhongxin Chinese Media Service), a wholly owned subsidiary of CNS, is the developer behind 36 such news websites across six continents, the Nisos report says. It has also made mobile apps for nearly a dozen such outlets, including the South America Overseas Chinese Press Network, which owns the three Twitter accounts. These apps are also particularly invasive when it comes to data gathering, the Nisos report says. At the same time, in a job posting for an overseas social media manager, CNS explicitly wrote that the work involves “setting up and managing medium-level accounts and covert accounts on overseas social platforms.” It’s unclear whether the three Twitter accounts identified in this report are operated by CNS.
If this is indeed a covert operation, the job has been done a little too well. Though they post several times a day, two of the accounts have followers in the single digits, while the third has around 80 followers—including a few real Chinese diplomats to Spanish-speaking countries. Most of the posts have received minimal engagement.

The lack of success is consistent with China’s past social media propaganda campaigns. This April, Google identified accounts in “a spammy influence network linked to China,” but the majority of the accounts had 0 subscribers, and over 80% of their videos had fewer than 100 views. Other platforms have identified similar unsuccessful attempts in the past, too.

Of all the state actors she has studied, Quincoses says, China is the least direct when it comes to the intentions of such networks. They could be playing the long game, she says. Or maybe they just haven’t figured out how to run covert Twitter accounts effectively. According to Quincoses, these accounts were never among those Twitter labeled as government-funded media (a practice it dropped in April). This could be related to the limited traction the accounts got, or to the efforts they made to obscure their ties to Chinese state media.

As other platforms emerge to take on Twitter, Chinese state-owned publications have begun to appear on them too. Xinhua News Service, China’s main state-owned news agency, has several accounts on Mastodon, one of which still posts regularly. And CGTN, the country’s state broadcaster, has an account on Threads that already has over 50,000 followers. Responding to an inquiry from the Australian government, the platform plans to add labels for government-affiliated media soon. But can it target accounts like these that are trying (and failing) to promote China’s image? They may be small fish now, but it’s better to catch them early, before they grow as influential as their more successful peers from Russia.
Do social media users need better tools to sort out what might be government-affiliated media? Tell me at zeyi@technologyreview.com.

Catch up with China

1. John Kerry, the US climate envoy, is visiting China to restart climate negotiations between the two countries.
2. Executives of American chip companies, including Intel, Qualcomm, and Nvidia, are flocking to Washington to talk the administration out of more curbs against China.
3. The Taiwanese chip giant TSMC is known for harsh workplace rules imposed to protect its trade secrets, including a ban on Apple Watches at work. Now, facing difficulty attracting talent, the company is relaxing those rules.
4. A Kenyan former content moderator for TikTok is threatening to sue the app and its local content moderation contractor, claiming PTSD and unfair dismissal.
5. Amazon sellers say their whole stores—including images, descriptions, and even product testing certificates—have been cloned by sellers on Temu, the rising cross-border e-commerce platform from China.
6. Microsoft says Chinese hackers accessed the email accounts of Commerce Secretary Gina Raimondo and other US officials in June, but they didn’t get any classified email.
7. Badiucao, an exiled Chinese political cartoonist, is carefully navigating security risks as he tours his artworks around the world.

Lost in translation

As image-making AIs become increasingly popular, some Chinese fashion brands are ditching real human models and opting for AI-generated ones. Chinese media report that some Stable Diffusion users are charging Chinese vendors 15 RMB (about $2) for an AI-generated product catalogue photo. A specialized website (still built on the open-source Stable Diffusion algorithm) allows vendors to customize the look of the model for just $2.80. Meanwhile, the cost of a photography session with a human model usually comes out to about $14 per photo, according to professional model Zhao Xuan.
AI has already started taking jobs from human models, Zhao said, and it’s promoting unrealistic beauty standards in the industry. “The emergence of AI models is popularizing extreme aesthetics and causing professional models to have body shame,” she said. And the technology is still in its early stages: commercially available services often take more than a week, and the quality of the result is variable. (Social media screenshots collected by AI Lanmeihui.)

One more thing

Some Chinese workers are being asked to use AI tools but find that the process of tinkering with them takes too much time. As a result, they’ve been faking using ChatGPT or Midjourney and instead doing their job the old-fashioned way. One social media copywriter managed to mimic ChatGPT’s writing style so well that his boss was fully convinced it had to be the work of an AI. The boss then showed it around the office, asking other colleagues to generate articles like this too.
July 19, 2023
Today’s retailers are faced with a clear opportunity for transformation. Consumer expectations are constantly evolving, challenging retailers to keep pace. A blend of online and in-person shopping forged during the pandemic persists, forcing retailers to deliver a highly personalized omnichannel experience. And retailers’ values are becoming as important to consumers as their products and services. “As consumers, we are more sophisticated shoppers. We have so much buying power with the mobile technology at our fingertips and high expectations,” says Mike Webster, senior vice president and general manager at Oracle Retail. “And despite the grand promises of retail technology, the shopping experience may leave us underwhelmed due to a poor execution.” This is a clear call for many retailers to create customer-centric shopping experiences. Forget about a laser-like focus on product development and delivery. Rather, savvy retailers are creating holistic, personalized shopping experiences that engage and fulfill customer needs throughout the customer journey. Consumers want this personal touch: 66% say they want brands to reach out to them, with personalized messages such as discounts and offers on items they’ve purchased before (44%) or predictions about products they may like (32%), according to a 2022 consumer research report by Oracle Retail. “In a world where the consumer is getting more and more diverse, more and more segmented, and more and more individualistic, it’s critical that retailers reimagine how to put the customer at the heart of their processes,” says Daniel Edsall, principal and global grocery leader at Deloitte Consulting LLP. But while shifting focus from traditional merchandising to a fully customer-centered view is imperative, retailers must overcome some significant obstacles to succeed. Many are burdened by legacy technology that is expensive to maintain and difficult to reconfigure. 
Labor shortages continue to hamper retailers’ efforts to embark on new endeavors. And pandemic-induced shockwaves can still be felt in the form of supply chain disruptions and delivery delays. The good news is that there are ways to embrace a more customer-centric business model while addressing modern-day labor and technology challenges. One key: cloud-based technology platforms that enable technology innovation and empower retailers to shift from siloed product categories and departments to a holistic view of the customer, inventory, and operations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
July 18, 2023
Building fair and transparent systems with artificial intelligence has become an imperative for enterprises. AI can help enterprises create personalized customer experiences, streamline back-office operations from onboarding documents to internal training, prevent fraud, and automate compliance processes. But deploying intricate AI ecosystems with integrity requires good governance standards and metrics. To deploy and manage the AI lifecycle—encompassing advanced technologies like machine learning (ML), natural language processing, robotics, and cognitive computing—both responsibly and efficiently, firms like JPMorgan Chase employ best practices known as ModelOps. These best governance practices involve “establishing the right policies and procedures and controls for the development, testing, deployment and ongoing monitoring of AI models so that it ensures the models are developed in compliance with regulatory and ethical standards,” says JPMorgan Chase managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance, Stephanie Zhang. Because AI models are driven by data and environment changes, says Zhang, continuous compliance is necessary to ensure that AI deployments meet regulatory requirements and establish clear ownership and accountability. Amidst these vigilant governance efforts to safeguard AI and ML, enterprises can encourage innovation by creating well-defined metrics to monitor AI models, employing widespread education, encouraging all stakeholders’ involvement in AI/ML development, and building integrated systems. “The key is to establish a culture of responsibility and accountability so that everyone involved in the process understands the importance of this responsible behavior in producing AI solutions and be held accountable for their actions,” says Zhang. This episode of Business Lab is produced in association with JPMorgan Chase. 
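The continuous compliance described above can be pictured as a drift check that compares live input data against the training-time distribution and raises an alert when they diverge. The population-stability-index sketch below is purely illustrative: the data, the threshold, and the alerting rule are invented for the example, and it is not a depiction of JPMorgan Chase’s actual tooling.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training data and live data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so out-of-range live values are still counted.
    edges[0] = min(expected.min(), actual.min()) - 1e-9
    edges[-1] = max(expected.max(), actual.max()) + 1e-9
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
train = rng.normal(0, 1, 10_000)        # feature distribution at training time
live_ok = rng.normal(0, 1, 2_000)       # live data, unchanged
live_drift = rng.normal(0.8, 1, 2_000)  # live data after a shift

# A common rule of thumb: PSI > 0.2 signals material drift worth an alert.
for name, live in [("stable", live_ok), ("drifted", live_drift)]:
    score = psi(train, live)
    print(name, round(score, 3), "ALERT" if score > 0.2 else "ok")
```

In practice such checks run per feature on a schedule, feeding the alerting, ownership, and review processes the interview goes on to describe.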
Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is building and deploying artificial intelligence with a focus on ModelOps, governance, and building transparent and fair systems. As AI becomes more complicated, but also integrated into our daily lives, the need to balance governance and innovation is a priority for enterprises. Two words for you: good governance.

Today we are talking with Stephanie Zhang, managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance at JPMorgan Chase. This podcast is produced in association with JPMorgan Chase. Welcome, Stephanie.

Stephanie Zhang: Thank you for having me, Laurel.

Laurel: Glad to have you here. So, often people think of artificial intelligence as individual technologies or innovations, but could you describe the ecosystem of AI and how it can actually help different parts of the business?

Stephanie: Sure. I’ll start by explaining what AI is first. Artificial intelligence is the ability for a computer to think and learn. With AI, computers can do things that traditionally require human intelligence. AI can process large amounts of data in ways that humans cannot. The goal for AI is to be able to do things like recognizing patterns, making decisions, and judging like humans. And AI is not just a single technology or innovation, but rather an ecosystem of different technologies, tools and techniques that all work together to enable intelligent systems and applications. The AI ecosystem includes technologies such as machine learning, natural language processing, computer vision, robotics and cognitive computing, among others. And finally, software: the business software that makes decisions based on the predictive answers out of the models.
Laurel: That’s a really great way to set the context for using AI in the enterprise. So how does artificial intelligence help JPMorgan Chase build better products and services?

Stephanie: At JPMorgan Chase, our purpose is to make dreams possible for everyone, everywhere and every day. So we aim to be the most respected financial services firm in the world, serving corporations and individuals with exceptional client service, operational excellence, a commitment to integrity, fairness, responsibility, and we’re a great place to work with a winning culture. Now, all of these things I have mentioned from the previous questions that you have asked, AI can contribute towards that.

So specifically, first of all, AI is involved in making better products and services from the back office to the front customer-facing applications. There are some examples here. For example, I mentioned earlier improved customer experience: we use AI to personalize the customer experience. Second is streamlined operations. Behind the scenes, a lot of the AI applications are in the space of streamlining our operations, and those range from client onboarding documents to AI-assisted agents to helping us with internal training. Third, fraud detection and prevention. As a financial services company, that helps us in terms of cybersecurity and in terms of credit card fraud detection and prevention, much of which is done by analyzing large amounts of data to detect anomalous situations. And then last but not least, trading and investment. It helps our investment managers by bringing information in an efficient manner, and helps recommend certain information and things to look at. Compliance as well: AI-powered tools can also help financial services firms such as ours to comply with regulatory requirements by automating these compliance processes.
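The anomaly detection Zhang mentions for fraud screening can be sketched in a few lines. This toy example uses made-up transaction amounts and a robust median/MAD outlier score; it illustrates the general idea only and does not depict any real fraud system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up card transactions: amounts in dollars (not real data).
amounts = rng.normal(60, 15, 500)        # ordinary spending
amounts = np.append(amounts, [4800.0])   # one wildly unusual charge

# Robust anomaly score: distance from the median in units of the
# median absolute deviation (MAD), which outliers cannot easily skew.
median = np.median(amounts)
mad = np.median(np.abs(amounts - median))
scores = np.abs(amounts - median) / mad

flagged = np.where(scores > 10)[0]  # threshold chosen for illustration
print("flagged transaction amounts:", amounts[flagged])
```

The median/MAD score is a deliberately simple stand-in; production systems combine many features and far more sophisticated models, but the principle of scoring each event against learned normal behavior is the same.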
Laurel: That’s a great explanation, Stephanie. So more specifically, what is ModelOps and how is it used with AI and then to help the firm innovate?

Stephanie: ModelOps is a set of best practices and tools used to manage the overall lifecycle of AI and machine learning models in the production environment. Specifically, it’s focused more on the governance side of things, but from an end-to-end lifecycle management perspective: from the very beginning of when you want to approach an AI/ML project (the intention of the project and the outcome that you desire to have), to the model development, to how you process the data, to how you deploy the model, and ongoing monitoring of the model to see if the model’s performance is still as intended. It’s a structured approach to managing the entire lifecycle of AI models.

Laurel: There’s certainly quite a bit to consider here. So specifically, how does that governance that you mentioned earlier play into the development of artificial intelligence across JPMorgan Chase and the tools and services being built?

Stephanie: So, the governance program that we are developing surrounding AI/ML not only ensures that the AI/ML models are developed in a responsible manner, in a trustworthy manner, but also increases efficiency and innovation in this space. Effective governance ensures that the models are developed in the right way and deployed in a responsible way as well. Specifically, it involves establishing the right policies and procedures and controls for the development, testing, deployment and ongoing monitoring of AI models, so that it ensures the models are developed in compliance with regulatory and ethical standards, and also governs how we handle data. And then on top of that, the models are continuously monitored and updated to reflect changes in the environment.

Laurel: So as a subset of governance, what role does continuous compliance then play in the process of governance?
Stephanie: Continuous compliance is an important part of governance in the deployment of AI models. It involves ongoing monitoring and validation of AI models to ensure that they’re compliant with regulatory and ethical standards, as well as with use case objectives and the organization’s internal policies and procedures. We all know that AI model development is not like software development, where if you don’t change the code, nothing really changes; AI models are driven by data. So as the data and environment change, it requires us to constantly monitor the model’s performance to ensure the model is not drifting away from what we intended.

So continuous compliance requires that AI models are constantly monitored and updated to reflect the changes that we observe in the environment, to ensure that they still comply with regulatory requirements. As we know, more and more regulatory rules are coming out across the world in the space of using data and using AI. And this can be achieved through model monitoring tools: capturing data in real time, providing an alert when the model is out of compliance, and then alerting the developers to make the changes required. But one of the other important things is not just detecting the changes through monitoring, but also establishing clear ownership and accountability for compliance. And this can be done through an established responsibility matrix, with governance or oversight boards that are constantly reviewing these models. It also involves independent validation of how the model is built and how the model is deployed. So in summary, continuous compliance plays a really important role in the governance of AI models.

Laurel: That’s great. Thank you for that detailed explanation. So because you personally specialize in governance, how can enterprises balance both providing safeguards for artificial intelligence and machine learning deployment, but still encourage innovation?
Stephanie: So balancing safeguards for AI/ML deployment and encouraging innovation can be a really challenging task for enterprises. It’s large scale, and it’s changing extremely fast. However, it is critically important to have that balance; otherwise, what is the point of having the innovation here? There are a few key strategies that can help achieve this balance.

Number one, establish clear governance policies and procedures: review and update existing policies where they may not suit AI/ML development and deployment, and add new policies and procedures where needed, such as monitoring and continuous compliance, as I mentioned earlier.

Second, involve all the stakeholders in the AI/ML development process. That starts with data engineers, the business, the data scientists, and also the ML engineers who deploy the models in production, the model reviewers, business stakeholders and risk organizations. And that’s what we are focusing on: we’re building integrated systems that provide transparency, automation and a good user experience from beginning to end. All of this will help streamline the process and bring everyone together.

Third, we need to build systems that not only allow this overall workflow but also capture the data that enables automation. Oftentimes many of the activities happening in the ML lifecycle process are done through different tools, because they reside with different groups and departments, and that results in participants manually sharing information, reviewing, and signing off. So having an integrated system is critical.

Four, monitoring and evaluating the performance of AI/ML models, as I mentioned earlier, is really important, because if we don’t monitor the models, they can actually have a negative effect relative to their original intent. And doing this manually will stifle innovation.
Model deployment requires automation, so having that is key in order to allow your models to be developed and deployed in the production environment and actually operating. That it’s reproducible and operating in production is very, very important. So is having well-defined metrics to monitor the models, and that involves the infrastructure, the model performance itself, and the data. Finally, providing training and education, because it’s a group sport: everyone comes from different backgrounds and plays a different role. Having that cross-understanding of the entire lifecycle process is really important. And education on what the right data to use is, and whether we are using the data correctly for the use cases, will prevent a much later rejection of the model deployment. So, all of these I think are key to balancing governance and innovation.

Laurel: So there’s another topic here to be discussed, and you touched on it in your answer, which was: how does everyone understand the AI process? Could you describe the role of transparency in the AI/ML lifecycle from creation to governance to implementation?

Stephanie: Sure. So AI/ML is still fairly new and still evolving, but in general people have settled on a high-level process flow: defining the business problem; acquiring the data and processing the data to solve the problem; then building the model, which is model development; and then model deployment. But prior to the deployment, we do a review in our company to ensure the models are developed according to the right responsible AI principles, and then ongoing monitoring. When people talk about the role of transparency, it’s about the ability to capture all the metadata artifacts across the entire lifecycle, the lifecycle events. All this metadata needs to be transparent, with timestamps, so that people can know what happened. And that’s how we share the information.
And having this transparency is so important because it builds trust and it ensures fairness. We need to make sure that the right data is used, and transparency facilitates explainability: there’s this thing about models, that they need to be explained. How does a model make decisions? It also helps support the ongoing monitoring, and it can be done in different ways. The one thing that we stress very much from the beginning is understanding what the AI initiative’s goals are, the use case goal, and what the intended data use is. We review that. How did you process the data? What’s the data lineage and the transformation process? What algorithms are being used, and what are the ensemble algorithms that are being used? The model specification needs to be documented and spelled out: what are the limitations of when the model should be used and when it should not be used? Explainability, auditability: can we actually track how this model is produced, all the way through the model lineage itself? And also technology specifics such as the infrastructure and the containers involved, because this actually impacts the model performance; where it’s deployed; which business application is actually consuming the output prediction of the model; and who can access the decisions from the model. All of these are part of the transparency subject.

Laurel: Yeah, that’s quite extensive. So considering that AI is a fast-changing field with many emerging technologies like generative AI, how do teams at JPMorgan Chase keep abreast of these new inventions while then also choosing when and where to deploy them?

Stephanie: The speed of innovation in the technology field is just growing so exponentially fast. Of course, AI technology is still emerging, and it is truly a challenging task. However, there are a few things that we can do, and are doing, to help the teams keep abreast of these new innovations. One, we build a strong internal knowledge base.
We have a lot of talent at JPMorgan Chase, and the teams continue to build their knowledge base; different teams evaluate different technologies, and they share what they learn. And we attend conferences, webinars, and industry events, so that’s really important. Second, we engage with industry experts, thought leaders and vendors. Oftentimes, startups have the brightest ideas as to what to do with the latest technology. And we are also very much involved with educational institutes and researchers. Those help us learn about the newest developments in the field. The third thing is that we do a lot of pilot projects and POCs [proofs of concept]. We have hackathons in the firm, and JPMorgan Chase is a place where employees from all roles are encouraged to come up with innovative ideas. And the fourth thing is we have a lot of cross-functional teams that collaborate. Collaboration is where innovation truly emerges. That’s where new ideas and new ways of approaching an existing problem happen, and different minds start thinking about problems from different angles. So those are all the amazing ways that we benefit from each other.

Laurel: So this is a really great conversation, because although you’re saying technology is obviously at the crux of what you do, people also play a large part in developing and deploying AI and ML models. So, then, how do you go about ensuring the people who develop the models and manage the data operate responsibly?

Stephanie: This is a topic I’m very passionate about, because first and foremost, I think having a diverse team is always the winning strategy. And particularly in the AI/ML world, we are using data to solve problems, and understanding bias, and being conscious about those things so we don’t fall into the trap of unintentionally using data in the wrong way, is important. So, what that means is that there are several ways to promote responsible behaviors, because models are built by people.
One, we establish clear policies and guidelines. Financial services firms tend to have strong risk management, so we’re very strong in that sense. However, with the emerging field of AI/ML, we are expanding those policies and guidelines. Two, very important, is providing training and education. Oftentimes, as data scientists, people are more focused on technology. They’re focused on building a model with the best performance scores, the best accuracy, and perhaps are not so well versed in questions like: am I using the right data? Should I be using this data? On all of those things, we need continued education so that people know how to build models responsibly. Then we want to foster a culture of responsibility. Within JPMorgan Chase, various groups have already sprung up to talk about this. Responsible AI and ethical AI are major topics here in our firm, and data privacy and ethics are topics not only in our training classes but also in various employee groups. Ensuring transparency: this is where transparency is important. If people don’t know what they’re doing, and there isn’t a separate group able to monitor and review the models being produced, they may not learn the right way of doing it. The key is to establish a culture of responsibility and accountability, so that everyone involved in the process understands the importance of this responsible behavior in producing AI solutions and is held accountable for their actions.

Laurel: So, a quick follow-up to that important people aspect of artificial intelligence. What are some best practices JPMorgan Chase employs to ensure that diversity is being taken into account when both hiring new employees as well as building and then deploying those AI models?

Stephanie: So, JPMorgan Chase is present in over a hundred markets around the globe, right? We’re actively seeking out diverse candidates throughout the world, and 49% of our global hires are women.
And 58% of the new US hires are ethnically diverse. So we have been at the forefront and continue to hire diversely. Ensuring diverse hiring practices is very important. Second, we need to create diverse teams as well. Diverse teams include individuals with diverse backgrounds from diverse fields, not just computer science and AI/ML; sociology and other fields are also important, and they all bring rich perspectives and creative problem-solving techniques.

And the other thing, again, I’m going back to this, which is monitoring and auditing AI models for bias. Not all AI models require bias monitoring; we tier the models depending on their use, and those that do need to get evaluated for it. Therefore, it’s very, very important to follow the risk management framework and identify potential issues before they become significant problems, and to ensure that bias in the data and bias in the model development are detected through a sufficient amount of testing.

And, finally, fostering a culture of inclusivity. Creating a culture of inclusivity that values diversity and encourages different perspectives can shape how we develop the models. So, we hire diverse candidates and we form teams that are diverse, but we also need to constantly reinforce this culture of DEI. That includes establishing training programs and promoting communication among the communities of AI/ML folks. We talk about how we produce models and how we develop models, and what the things are that we should be looking out for. So, promoting diversity and inclusion in the development and deployment of AI models requires ongoing effort and continuous improvement, and it’s really important to ensure that diverse viewpoints are represented throughout the whole process.

Laurel: This has been a really great discussion, Stephanie, but one last question.
Much of this technology seems to be emerging so quickly, but how do you envision the future of ModelOps in the next five years?

Stephanie: So, over the last few years, the industry has matured from model development to full AI lifecycle management, and now we see technology has evolved from the ML platform toward the AI ecosystem, from just making ML work to responsible AI. In the near future, I expect ModelOps to continue to evolve and become more and more sophisticated as organizations increasingly adopt AI and machine learning technology. Several key trends are likely to shape the future of ModelOps. First, increased automation. As the volume and complexity of AI models continue to grow, automation will become increasingly important in managing the entire model lifecycle. We just can't catch up if we don't automate. From development to deployment and monitoring, this requires the development of much more advanced tools and platforms that can automate many of the tasks currently still performed by human operators. Second, a greater focus on explainability and interpretability. As AI models become more complex and are used to make more important decisions, there will be an increased focus on ensuring that models are explainable and interpretable so that stakeholders can understand how decisions are made. This will require the development of new techniques and tools for model interpretability. Third, integration with DevOps. As I mentioned earlier, just making the ML model work is no longer enough. Many models being trained are now getting into the production environment, so ModelOps will continue to integrate with DevOps, enabling organizations to manage both software and AI models in a unified manner. This will require the development of new tools and platforms that enable seamless integration of AI model development and deployment with software development and deployment.
And then, increased use of cloud-based services. As more organizations move their operations to the cloud, there will be increased use of cloud-based services for AI model development and deployment, and this will require new tools, again, to integrate seamlessly with cloud-based infrastructure. So the future of ModelOps is likely to be definitely more automation, an increased focus on explainability and interpretability, tighter integration with DevOps, and increased use of the cloud.

Laurel: Well, thank you very much, Stephanie, for what has been a fantastic episode of the Business Lab.

Stephanie: My pleasure. Thank you for having me.

Laurel: That was Stephanie Zhang, the managing director and general manager of ModelOps, AI and ML lifecycle management and governance at JPMorgan Chase, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff. This podcast is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting, or regulatory advice.
Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings, or quotations is not the responsibility of JPMorgan Chase & Co.
July 18, 2023
The emergence of consumer-facing generative AI tools in late 2022 and early 2023 radically shifted the public conversation around the power and potential of AI. Though generative AI had been making waves among experts since the introduction of GPT-2 in 2019, it is only now that its revolutionary opportunities have become clear to enterprises. The weight of this moment—and the ripple effects it will inspire—will reverberate for decades to come. The impact of generative AI on economies and enterprise will be revolutionary. McKinsey Global Institute estimates that generative AI will add between $2.6 trillion and $4.4 trillion in annual value to the global economy, increasing the economic impact of AI as a whole by 15 to 40%. The consultancy projects that AI will automate half of all work between 2040 and 2060, with generative AI pushing that window a decade earlier than previous estimates. Goldman Sachs forecasts a 7%—or nearly $7 trillion—increase in global GDP attributable to generative AI, and the firm expects that two-thirds of U.S. occupations will be affected by AI-powered automation. Text-generating AI systems, such as the popular ChatGPT, are built on large language models (LLMs). LLMs train on a vast corpus of data to answer questions or perform tasks based on statistical likelihoods. Rather than searching and synthesizing answers, they use mathematical models to predict the most likely next word or output. “What was exciting to me, when I first interacted with ChatGPT, was how conversant it was,” says Michael Carbin, associate professor at MIT and founding advisor at MosaicML. “I felt like, for the first time, I could communicate with a computer and it could interpret what I meant. We can now translate language into something that a machine can understand.
I can’t think of anything that’s been more powerful since the desktop computer.” Although AI was recognized as strategically important before generative AI became prominent, our 2022 survey found CIOs’ ambitions limited: while 94% of organizations were using AI in some way, only 14% were aiming to achieve “enterprise-wide” AI by 2025. By contrast, the power of generative AI tools to democratize AI—to spread it through every function of the enterprise, to support every employee, and to engage every customer—heralds an inflection point where AI can grow from a technology employed for particular use cases to one that truly defines the modern enterprise. As such, chief information officers and technical leaders will have to act decisively: embracing generative AI to seize its opportunities and avoid ceding competitive ground, while also making strategic decisions about data infrastructure, model ownership, workforce structure, and AI governance that will have long-term consequences for organizational success. This report explores the latest thinking of chief information officers at some of the world’s largest and best-known companies, as well as experts from the public, private, and academic sectors. It presents their thoughts about AI against the backdrop of our survey of 600 senior data and technology executives. Key findings include the following:

• A trove of unstructured and buried data is now legible, unlocking business value. Previous AI initiatives had to focus on use cases where structured data was ready and abundant; the complexity of collecting, annotating, and synthesizing heterogeneous datasets made wider AI initiatives unviable. By contrast, generative AI’s new ability to surface and utilize once-hidden data will power extraordinary new advances across the organization.

• The generative AI era requires a data infrastructure that is flexible, scalable, and efficient. To power these new initiatives, chief information officers and technical leads are embracing next-generation data infrastructures. More advanced approaches, such as data lakehouses, can democratize access to data and analytics, enhance security, and combine low-cost storage with high-performance querying.

• Some organizations seek to leverage open-source technology to build their own LLMs, capitalizing on and protecting their own data and IP. CIOs are already cognizant of the limitations and risks of third-party services, including the release of sensitive intelligence and reliance on platforms they do not control or have visibility into. They also see opportunities around developing customized LLMs and realizing value from smaller models. The most successful organizations will strike the right strategic balance based on a careful calculation of risk, comparative advantage, and governance.

• Automation anxiety should not be ignored, but dystopian forecasts are overblown. Generative AI tools can already complete complex and varied workloads, but the CIOs and academics interviewed for this report do not expect large-scale automation threats. Instead, they believe the broader workforce will be liberated from time-consuming work to focus on higher-value areas of insight, strategy, and business value.

• Unified and consistent governance forms the rails on which AI can speed forward. Generative AI brings commercial and societal risks, including the protection of commercially sensitive IP, copyright infringement, unreliable or unexplainable results, and toxic content. To innovate quickly without breaking things or getting ahead of regulatory changes, diligent CIOs must address the unique governance challenges of generative AI, investing in technology, processes, and institutional structures. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
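As an illustration of the statistical next-word prediction described above, here is a toy Python sketch. The three-word vocabulary and the scores are invented for the example; a real LLM computes scores over tens of thousands of tokens with billions of learned parameters.

```python
import math
import random

def softmax(logits):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_word(candidates, logits, temperature=1.0):
    """Sample a continuation; lower temperature makes the pick more deterministic."""
    probs = softmax([l / temperature for l in logits])
    return random.choices(candidates, weights=probs, k=1)[0]

# Hypothetical continuations of "The cat sat on the ..." with invented scores.
candidates = ["mat", "moon", "theorem"]
logits = [4.0, 1.0, -2.0]

probs = softmax(logits)
print(dict(zip(candidates, (round(p, 3) for p in probs))))
print(next_word(candidates, logits, temperature=0.5))
```

The model is not looking anything up: it simply weighs every candidate word and emits one of the likeliest, which is why LLM output is fluent yet can be confidently wrong.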
July 18, 2023
It’s becoming increasingly clear that courts, not politicians, will be the first to determine the limits on how AI is developed and used in the US. Last week, the Federal Trade Commission opened an investigation into whether OpenAI violated consumer protection laws by scraping people’s online data to train its popular AI chatbot, ChatGPT. Meanwhile, artists, authors, and the image company Getty are suing AI companies such as OpenAI, Stability AI, and Meta, alleging that they broke copyright laws by training their models on their work without providing any recognition or payment. If these cases prove successful, they could force OpenAI, Meta, Microsoft, and others to change the way AI is built, trained, and deployed so that it is more fair and equitable. They could also create new ways for artists, authors, and others to be compensated for having their work used as training data for AI models, through a system of licensing and royalties. The generative AI boom has intensified calls for passing AI-specific laws. However, we’re unlikely to see any such legislation pass in the next year, given the split Congress and intense lobbying from tech companies, says Ben Winters, senior counsel at the Electronic Privacy Information Center. Even the most prominent attempt to create new AI rules, Senator Chuck Schumer’s framework, does not include any specific policy proposals. “It seems like the more straightforward path [toward an AI rulebook is] to start with the existing laws on the books,” says Sarah Myers West, the managing director of the AI Now Institute, a research group. And that means lawsuits.

Lawsuits left, right, and center

Existing laws have provided plenty of ammunition for those who say their rights have been harmed by AI companies. In the past year, those companies have been hit by a wave of lawsuits, most recently from the comedian and author Sarah Silverman, who claims that OpenAI and Meta scraped her copyrighted material illegally off the internet to train their models.
Her claims are similar to those of artists in another class action alleging that popular image-generation AI software used their copyrighted images without consent. Microsoft, OpenAI, and GitHub are also facing a class action over GitHub’s AI-assisted programming tool Copilot, which the suit claims relies on “software piracy on an unprecedented scale” because it’s trained on existing programming code scraped from websites. Meanwhile, the FTC is investigating whether OpenAI’s data security and privacy practices are unfair and deceptive, and whether the company caused harm, including reputational harm, to consumers when it trained its AI models. It has real evidence to back up its concerns: OpenAI had a security breach earlier this year after a bug in the system caused users’ chat history and payment information to be leaked. And AI language models often spew inaccurate and made-up content, sometimes about people. OpenAI is bullish about the FTC investigation—at least in public. When contacted for comment, the company shared a statement from CEO Sam Altman in which he said the company is “confident we follow the law.” An agency like the FTC can take companies to court, enforce standards against the industry, and introduce better business practices, says Marc Rotenberg, the president and founder of the Center for AI and Digital Policy (CAIDP), a nonprofit. CAIDP filed a complaint to the FTC in March asking it to investigate OpenAI. The agency has the power to effectively create new guardrails that tell AI companies what they are and aren’t allowed to do, says Myers West. The FTC could require OpenAI to pay fines, delete any data that has been illegally obtained, and delete the algorithms that used the illegally collected data, Rotenberg says. In the most extreme case, ChatGPT could be taken offline. There is precedent for this: the agency made the diet company Weight Watchers delete its data and algorithms in 2022 after illegally collecting children’s data.
Other government enforcement agencies may very well start their own investigations too. The Consumer Financial Protection Bureau has said it is looking into the use of AI chatbots in banking, for example. And if generative AI plays a decisive role in the upcoming 2024 US presidential election, the Federal Election Commission could also investigate, says Winters. In the meantime, we should start to see the results of lawsuits trickle in, although it could take at least a couple of years before the class actions and the FTC investigation go to court. Many of the lawsuits that have been filed this year will be dismissed by a judge as being too broad, reckons Mehtab Khan, a resident fellow at Yale Law School, who specializes in intellectual property, data governance, and AI ethics. But they still serve an important purpose: lawyers are casting a wide net and seeing what sticks. This allows for more precise court cases that could lead companies to change the way they build and use their AI models down the line, she adds. The lawsuits could also force companies to improve their data documentation practices, says Khan. At the moment, tech companies have a very rudimentary idea of what data goes into their AI models. Better documentation of how they have collected and used data might expose any illegal practices, but it might also help them defend themselves in court.

History repeats itself

It’s not unusual for lawsuits to yield results before other forms of regulation kick in—in fact, that’s exactly how the US has handled new technologies in the past, says Khan. Its approach differs from that of other Western countries. While the EU is trying to prevent the worst AI harms proactively, the American approach is more reactive. The US waits for harms to emerge first before regulating, says Amir Ghavi, a partner at the law firm Fried Frank. Ghavi is representing Stability AI, the company behind the open-source image-generating AI Stable Diffusion, in three copyright lawsuits.
“That’s a pro-capitalist stance,” Ghavi says. “It fosters innovation. It gives creators and inventors the freedom to be a bit more bold in imagining new solutions.” The class action lawsuits over copyright and privacy could shed more light on how “black box” AI algorithms work and create new ways for artists and authors to be compensated for having their work used in AI models, say Joseph Saveri, the founder of an antitrust and class action law firm, and Matthew Butterick, a lawyer. They are leading the suits against GitHub and Microsoft, OpenAI, Stability AI, and Meta. Saveri and Butterick represent Silverman, part of a group of authors who claim that the tech companies trained their language models on their copyrighted books. Generative AI models are trained using vast data sets of images and text scraped from the internet. This inevitably includes copyrighted data. Authors, artists, and programmers say tech companies that have scraped their intellectual property without consent or attribution should compensate them. “There’s a void where there’s no rule of law yet, and we’re bringing the law where it needs to go,” says Butterick. While the AI technologies at issue in the suits may be new, the legal questions around them are not, and the team is relying on “good old fashioned” copyright law, he adds. Butterick and Saveri point to Napster, the peer-to-peer music sharing system, as an example. The company was sued by record companies for copyright infringement, and it led to a landmark case on the fair use of music. The Napster settlement cleared the way for companies like Apple, Spotify, and others to start creating new license-based deals, says Butterick. The pair is hoping their lawsuits, too, will clear the way for a licensing solution where artists, writers, and other copyright holders could also be paid royalties for having their content used in an AI model, similar to the system in place in the music industry for sampling songs. 
Companies would also have to ask for explicit permission to use copyrighted content in training sets. Tech companies have treated publicly available copyrighted data on the internet as subject to “fair use” under US copyright law, which would allow them to use it without asking for permission first. Copyright holders disagree. The class actions will likely determine who is right, says Ghavi. This is just the beginning of a new boom time for tech lawyers. The experts MIT Technology Review spoke to agreed that tech companies are also likely to face litigation over privacy and biometric data, such as images of people’s faces or clips of them speaking. Prisma Labs, the company behind the popular AI avatar program Lensa, is already facing a lawsuit over the way it collected users’ biometric data. Ben Winters believes we will also see more lawsuits around product liability and Section 230, which would determine whether AI companies are responsible if their products go awry and whether they should be liable for the content their AI models produce. “The litigation processes can be a blunt object for social change but, nonetheless, can be quite effective,” says Saveri. “And no one’s lobbying Matthew [Butterick] or me.”
July 17, 2023
This is today’s edition of our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This company plans to transplant pig hearts into babies next year

A biotech company called eGenesis is experimenting with transplanting the hearts of young gene-edited pigs into baby baboons as part of a study that could pave the way for similar transplants in human babies. It hopes to transplant pig hearts into babies with serious heart defects as early as next year, in a bid to buy them more time to wait for a human heart. The company has developed a technique that uses the gene-editing tool CRISPR to make around 70 edits to a pig’s genome. In theory, these edits should allow the organs to be successfully transplanted into people. The practice is proving more difficult: the team is planning to test the procedure on 12 infant baboons, but of the two surgeries that have been performed so far, neither animal survived beyond a matter of days. Still, the company, and others in the field, remain optimistic. —Jessica Hamzelou

How tech companies got access to our tax data

You might think (or at least hope) that sensitive data like your tax returns would be kept under close care. But we learned last week that tax prep companies have been sharing millions of taxpayers’ sensitive personal information with Meta and Google, some for over a decade. The tax companies shared the data through tracking pixels, which are used for advertising purposes, an investigative congressional report revealed on Wednesday. Many of them say they have removed the pixels, but it’s not clear whether some sensitive data is still being held by the tech companies. The findings expose the significant privacy risks that advertising and data sharing pose—and it’s possible that regulators might actually do something about it. —Tate Ryan-Mosley

This story is from The Technocrat, Tate’s weekly newsletter covering power in Silicon Valley. Sign up to receive it in your inbox every Friday.
The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Inside the fightback against AI
Writers and artists fed up with AI companies scraping their data are starting to mobilize.
+ Europe wants US lawmakers to act more quickly over regulating AI.
+ Five big takeaways from Europe’s AI Act.
+ AI’s economic impact will extend far beyond chatbots.

2 Inside Threads’ plan to force Twitter to unravel
Six months’ intensive work, fueled by a grudge.
+ Twitter is teetering on the precipice.

3 A typo is behind the leak of sensitive US emails to Mali
Senders keep mistyping email addresses, exposing highly classified messages.

4 Israel is using AI to select air strike targets
Despite serious questions about accuracy.
+ The UN Security Council is meeting this week over its AI fears.
+ Why business is booming for military AI startups.

5 Tesla’s Cybertruck is finally going into production
It’s been a long four-year wait.
+ EVs are helping to keep homes running during power cuts.

6 Ukraine’s scientists are slowly trying to rebuild their institutions
But the nation’s budget is still being funneled into defense.

7 This fusion reactor relies on superconducting tape
10,000 kilometers of it, in fact.
+ A hole in the ground could be the future of fusion power.

8 Weather apps are more popular than ever
Some people check them as often as they would their social media.

9 Domino’s has conceded defeat to Uber Eats
It can no longer ignore the stranglehold delivery apps have over takeout.

10 The case for logging your mood
It can help people to understand the factors that affect their wellbeing.

Quote of the day

“We built it, we trained it, but we don’t know what it’s doing.”

—Sam Bowman, an AI professor at New York University, explains that AI companies don’t understand exactly how the tools they created work.
The big story

Humanity is stuck in short-term thinking. Here’s how we escape.

October 2020

Humans have evolved over millennia to grasp an ever-expanding sense of time. We have minds capable of imagining a future far off into the distance. Yet while we may have this ability, it is rarely deployed in daily life. If our descendants were to diagnose the ills of 21st-century civilization, they would observe a dangerous short-termism: a collective failure to escape the present moment and look further ahead. The world is saturated in information, and standards of living have never been higher, but so often it’s a struggle to see beyond the next news cycle, political term, or business quarter. How to explain this contradiction? Why have we come to be so stuck in the “now”? —Richard Fisher

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line.)
+ Yes, it’s AI wizardry, but is extremely entertaining.
+ Talking of Barbie, these are truly getting into the spirit of things.
+ Yikes, don’t mess with !
+ Why we just can’t get enough of .
+ I challenge you to find a better dancer than .
July 17, 2023
This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. Sign up to receive it in your inbox every Friday. You might think (or at least hope) that sensitive data like your tax returns would be kept under close care. But we learned this week that tax prep companies have been sharing millions of taxpayers’ sensitive personal information with Meta and Google, some for over a decade. The tax companies shared the data through tracking pixels, which are used for advertising purposes, an investigative congressional report revealed on Wednesday. Many of them say they have removed the pixels, but it’s not clear whether some sensitive data is still being held by the tech companies. The findings expose the significant privacy risks that advertising and data sharing pose, and it’s possible that regulators might actually do something about it.

What’s the story?

In November 2022, the Markup published an investigation into tax prep websites including TaxAct, TaxSlayer, and H&R Block. It found that the sites were sending data to Meta through Meta Pixel, a commonly used piece of computer code often embedded in websites to track users. The story prompted a congressional probe into the data practices of tax companies, and that report, published Wednesday, showed that things were much worse than even the Markup’s bombshell reporting suggested. The tech companies had received sensitive taxpayer data—like millions of people’s incomes, the size of their tax refunds, and even their enrollment status in government programs—dating back as early as 2011. Meta said it used the data to target ads to users on its platforms and to train its AI programs. It seems Google did not use the information for its own commercial purposes as directly as Meta did, though it’s unclear whether the company used the data elsewhere, according to an aide to Senator Elizabeth Warren. Experts say that both tax prep and tech companies could face significant legal consequences, including private lawsuits, challenges from the Federal Trade Commission, and even criminal charges from the US federal government.
What are tracking pixels?

At the center of the controversy are tracking pixels: bits of code that many websites embed to learn more about user behavior. Some of the most commonly used pixels are made by Google, Meta, and Bing. Websites use these pixels to collect information about their own users. The results can include information like where users click, what they type, and how long they scroll. Highly sensitive data can be gleaned from those sorts of activities, and that data can be used to target ads according to what you might be interested in. Pixels allow websites to communicate with advertising services across websites and devices, so that an ad provider can learn about a user. They are different from cookies, which store information about you, your computer, and your behavior on each website you visit.

So what are the risks?

These tracking pixels are everywhere, and many ads served online are placed at their direction. They contribute to the dominant economic model of the internet, which encourages data collection in the interest of targeted advertising. Often, users don’t know that websites they visit have pixels. In the past, privacy advocates have warned about pixels collecting sensitive information, for example. “This ecosystem involves everything from first-party collectors of data, such as apps and websites, to all the embedded tracking tools and pixels, online ad exchanges, data brokers, and other tech elements that capture and transmit data about people, including sensitive data about health or finances, and often to third parties,” Justin Sherman, a senior fellow at Duke University’s Sanford School of Public Policy, wrote to me in an email. “The underlying thread is the same: consumers may be more aware of how much data a single website or app or platform gathers directly, but most are unaware about just how many other companies are operating behind the scenes to gather similar or even more data every time they go online.” (P.S.
The Markup has a great explainer on how you can see what your company is sending to Meta through tracking pixels!)

What else I’m reading

The FTC is taking on OpenAI, according to a document first published by the Washington Post on Thursday. The agency is investigating the company and demanding records covering its security practices, AI training methods, and use of personal data. The investigation poses the first major regulatory challenge to OpenAI in the US, and I’ll be watching closely. Sam Altman, the CEO, doesn’t seem to be sweating too much, at least publicly. He said that “we are confident we follow the law.”

Speaking of the FTC, Commissioner Lina Khan, who has enthusiastically taken on Big Tech antitrust cases, was called in front of Congress this week. She faced criticism from some Republican lawmakers for “harassing” businesses and pursuing antitrust suits that the agency has lost. Khan has had a tough go lately. The latest loss came on Tuesday, when a judge ruled against the agency’s attempt to prevent Microsoft’s $69 billion acquisition of gaming company Activision.

I love this piece on Threads, the Twitter clone put out by Meta, from the Atlantic’s Caroline Mimbs Nyce. She writes, “Many users may not be excited to be on Threads, exactly—it’s more that they’re afraid not to be.” I’ve resisted joining for now, but I certainly feel some FOMO.

What I learned this week

China is fighting back against US export restrictions on its computer chips and semiconductors, my colleague Zeyi Yang reports in a piece published this week. At the beginning of July, China announced a new restriction on the export of gallium and germanium, two elements used in producing chips, solar panels, and fiber optics. Although the move itself won’t necessarily have a ton of impact, Zeyi writes that this might just be the start of Chinese countermeasures, which could include export restrictions on rare-earth elements or materials in electric-vehicle batteries, like lithium and cobalt.
“Because these materials are used in much greater quantities, it’s more difficult to find a substitute supply in a short time. They are the real trump card China may hold at the future negotiation table.”
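To make the tracking-pixel mechanism discussed above concrete, here is a toy Python sketch of how a pixel request carries data. A pixel is typically a tiny image whose URL encodes information about the page and user as query parameters; fetching the image delivers that data to the tracker's server. The endpoint, field names, and values below are invented for illustration, not Meta's or Google's actual API.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def pixel_url(endpoint, event, fields):
    """Build the URL a hypothetical tracking pixel would fetch."""
    return f"{endpoint}?{urlencode({'ev': event, **fields})}"

# What a site might embed as a 1x1 image (all names invented):
url = pixel_url("https://tracker.example.com/px.gif",
                "PageView",
                {"page": "/refund-status", "refund_amount": "1024"})

# What the tracker's server can decode from that single image request:
received = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
print(received)
```

The point of the sketch is that no special spyware is involved: an ordinary image request is enough to leak whatever the embedding site chooses to put in the URL, which is how tax data could end up with ad companies.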
July 17, 2023
The baby baboon is wearing a mesh gown and appears to be sitting upright. “This little lady … looks pretty philosophical, I would say,” says Eli Katz, who is showing me the image over a Zoom call. This baboon is the first to receive a heart transplant from a young gene-edited pig as part of a study that should pave the way for similar transplants in human babies, says Katz, chief medical officer at the biotech company eGenesis. The company, based in Cambridge, Massachusetts, has developed a technique that uses the gene-editing tool CRISPR to make around 70 edits to a pig’s genome. These edits should allow the organs to be successfully transplanted into people, the team says. As soon as next year, eGenesis hopes to transplant pig hearts into babies with serious heart defects. The goal is to buy them more time to wait for a human heart. Before that happens, the team at eGenesis will practice on 12 infant baboons. Two such surgeries have been performed so far. Neither animal survived beyond a matter of days. But the company is optimistic, as are others in the field. Many recipients of the first liver transplants didn’t survive either—but thousands of people have since benefited from such transplants, says Robert Montgomery, director of the NYU Langone Transplant Institute, who has worked with rival company United Therapeutics. Babies born with heart conditions represent “a great population to be focusing on,” he says, “because so many of them die.”

Editing risk

Over 100,000 people in the US alone are waiting for an organ transplant. Every day, some of them die waiting. Researchers are exploring multiple options, including the possibility of bioprinting organs or growing new ones inside people’s bodies. Transplanting animal organs is another potential alternative to help meet the need. The idea of using organs and tissues from animals, known as xenotransplantation, is an old one—the first such procedures were performed back in the 17th century. More recent attempts were made in the 1960s, and again in the 1990s.
Many of these used organs from monkeys and baboons. But toward the start of the 1990s, a consensus emerged that pigs were the best donor candidates, says Montgomery. Primates are precious—they are intelligent animals that experience complex emotions. Only a small number can be used for human research, and at any rate, they reproduce slowly. They are also more likely to be able to pass on harmful viruses. On the other hand, people already know a lot about how to rear and farm pigs, and their organs are about the right size for humans. But transferring organs between animals of different species isn’t straightforward. Even organs from another human can be rejected by a recipient’s immune system, and animal tissues have a lot more components that our immune systems will regard as “foreign.” This can cause the organ to be attacked by immune cells. There’s also the possibility of transferring a virus along with the organ, for example. Even if a donor animal isn’t infected, it will have “endogenous retroviruses”—genetic code for ancient viruses that have long since been incorporated into its DNA. These viruses don’t cause problems for their animal hosts. But there’s a chance they could cause an infection in another species. “There’s a risk that viruses that are endemic to animals evolve in a human and become deadly,” says Chris Gyngell, a bioethicist at Murdoch Children’s Research Institute in Melbourne, Australia. The team at eGenesis is using CRISPR to address this risk. “You can use CRISPR-Cas9 to inactivate the 50 to 70 copies of retrovirus in the genome,” says Mike Curtis, president and chief executive officer at eGenesis. The edits prevent retroviruses from being able to replicate, he says. Scientists at the company perform other gene edits, too. Several serve to “knock out” pig genes whose protein products trigger harmful immune responses in humans. 
And the team members insert seven human genes, which they believe should reduce the likelihood that the organ will be rejected by a human recipient’s immune system. In all, “we’re producing [organ] donors with over 70 edits,” says Curtis. The team performs these edits on pig fibroblasts—cells that are found in connective tissue. Then they take the DNA-containing nuclei of edited cells and put them into pig egg cells. Once an egg is fertilized with sperm, the resulting embryo is implanted into the uterus of an adult pig. Eventually, cloned piglets are delivered by C-section. “It’s the same technology that was used to clone Dolly back in the ’90s,” says Curtis, referring to the famous sheep that was the first animal cloned from an adult cell. eGenesis has around 400 cloned pigs housed at a research facility in the Midwest (Curtis is reluctant to reveal the exact location because facilities have been targeted by animal rights protesters). And early last year, the company set up a “clean” facility to produce organs fit for humans. Anyone who enters has to shower and don protective gear to avoid bringing in any bugs that might infect the pigs. The 200 pigs currently at this center live in groups of 15 to 25, says Curtis: “It’s basically like a very clean barn. We control all the feed that comes in, and we have waste control and airflow control.” There’s no mud. The pigs that don’t end up having their organs used will be closely studied, says Curtis. The company needs to understand how the numerous gene edits they implement affect an animal over the course of its life. The team also wants to know if the human genes continue to be expressed over time. Some of the pigs are over four years old, says Curtis. “So far, it looks good,” he adds.

eGenesis researchers collect cells from a pig donor. EGENESIS

Complications

When it comes to organ transplants, size is important. Surgeons take care to match the size of a donor’s heart to that of the recipient.
Baby baboons are small—only hearts taken from pigs aged one to two months old are suitable, says Curtis. Once they are transplanted, the hearts are expected to grow with the baboons. The first baboon to get a pig heart, which was just under a year old, died within a day of surgery. “It was a surgical complication,” says Curtis. The intravenous tube providing essential fluids to the baboon became blocked, he says. “The animal had to be euthanized.” A second baboon was operated on a few months later. The team encountered another surgical complication: this time, the surgeons couldn’t get the baboon’s blood vessels to stay attached to those in the pig’s organs. The baboon died nine days after the operation. In both cases, “the heart itself was beating well,” says Curtis. “So far, the first two are very encouraging from cardiac performance … the hearts look good.” The surgeons who performed the operations are confident they’ll be able to avoid the surgical complications in the future, he says.

Tough decisions

Once the baboon trial is completed, the team at eGenesis wants to offer the pig hearts to babies under the age of two who were born with severe heart conditions. Such children have limited treatment options—human hearts of the right size are few and far between, and some of the devices used to treat heart conditions in adults aren’t suitable for little children with small hearts. Curtis hopes the pig hearts could initially be used as a temporary measure for such children—essentially buying them more time to wait for a donated human heart. Once a potential recipient has been found, the company can seek approval for the surgery from the US Food and Drug Administration. Ethicists will point out that babies won’t be able to give informed consent for surgery. That decision will come down to their caregiver, who will likely be in a dire situation, says Syd Johnson, a bioethicist at Upstate Medical University in Syracuse, New York.
“These are parents who are desperate for anything that might save their child’s life,” she says. But Gyngell thinks the focus should be on who has the most to gain from an experimental procedure like this. “The fact is that pediatric patients have a greater clinical need, because there are far fewer other options available to them,” he says. Montgomery, who is himself the recipient of a donated human heart, agrees. He says he supports eGenesis’s goals. “These babies that have congenital heart disease … have a 50% mortality rate,” he says. “It’s a flip of a coin whether that kid is going to live or not.” That reasoning doesn’t wash with Johnson. The procedure is risky, and a child whose immune system rejects the organ could suffer, she says: “One hundred percent of the patients who’ve been transplanted with an animal organ have died [soon after the procedure]—that’s just an inescapable fact.” David Bennett Sr., who in 2022 became the first living person to receive a gene-edited pig heart, died two months after his surgery. There are more risks when using organs from gene-edited animals, says Johnson. We still don’t know if these genetic modifications might affect human recipients, especially in the long term. “The desire to do something to save these babies [with heart conditions] is obviously very strong for everyone who is involved,” she says. “But we still need to be honest and transparent about what the risks are—and they are, to some extent, unknown.” Montgomery himself has transplanted gene-edited pig organs into adults who have been declared brain dead. Those organs—which include kidneys and, in unpublished work, hearts—were from pigs bred by the rival company Revivicor, which was acquired by United Therapeutics. The experiments ran for just two or three days, but Montgomery plans to run a similar experiment in individuals who will be studied for a month after the transplant.
So far, he says, “we’ve got very good results.” He believes young children may be better candidates for pig organs than adults, because their immune systems are still developing and therefore might be less likely to reject the organ. “They may well have some level of tolerance,” he says. A third baboon is due to receive a pig heart in August. The company plans to perform at least one such operation a month until 12 animals have been operated on. The team members hope they’ll be able to fix the surgical issues and enable the baboons to live longer. Some other non-human primates that have received kidneys from the gene-edited pigs have already survived over a year, says Curtis. “When you’re pioneering something new, there’s a steep learning curve,” Montgomery says.
July 17, 2023
From securing a hybrid workforce to building pipelines for ever-increasing data streams and keeping multiple mission-critical systems up and running, the modern IT department faces numerous pressures. As director of IT for the packaged food company Conagra, Amit Khot is optimistic about the ways modern technology solutions and infrastructure can enable businesses to thrive and innovate. Khot describes the power of advanced data analytics both to improve a company’s understanding of its customers and to optimize its operations. The ability to combine internal company data with data collected from social media and at point of sale will enable savvy companies to recognize new patterns. These advanced analytics, he says, will go beyond answering standard questions about financials and historical performance to provide insight into more complex questions about customers’ thoughts and changing preferences. Meanwhile, these same data tools can also be used to fine-tune daily business operations, pinpointing issues with order fulfillment, improving long-range supply-and-demand forecasting, and digitizing manufacturing plant processes. Khot explains, “Planning is looking into the future, depending upon your past historical data, as to what your future demand and supply should look like. We have gone through a journey to modernize our planning platforms.” A modern enterprise resource planning (ERP) system is also a must for a distributed organization like Conagra. A single connected ERP system can manage and provide visibility into business processes that involve multiple divisions or departments. By doing so, a modern ERP can also ease highly complex processes, such as the technology integration of a newly acquired company.
Says Khot, “having a single view of finance, having a single view of the supply chain as early and as fast as possible, is one of the most important things that can help us get synergies out of the business as fast as possible.” This episode of Business Lab is produced in partnership with Infosys Cobalt. FULL TRANSCRIPT Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is technological evolution. Companies, whether they’re regional or global, startup or legacy, need to be able to quickly deploy technologies as markets and supply chains shift and change. While many worries may keep executives up at night, building modern systems and adopting the right technologies to better understand data will help those executives and companies gain efficiencies and provide an excellent customer experience. Two words for you: meeting demand. My guest is Amit Khot, Director of IT for Conagra. Welcome, Amit. Amit Khot: Hello, nice to meet you. Laurel: Great to have you here. So just a little background in case folks aren’t familiar with Conagra: it’s a consumer packaged goods company that has been around for more than 100 years. Conagra produces products like Birds Eye, Healthy Choice, and Slim Jim; various foods that you can find in supermarkets and in restaurants around the world. You have been with Conagra now for 23 years. How has your role evolved as the company and technology have transformed? Amit: Absolutely. I started with Conagra, as you said, 23 years ago and I started as a program analyst with the company. Again, 23 years ago is a long time. Program analyst starting from that point and then I evolved into implementing our SAP ecosystem. That is what I started doing in the early 2000s.
As time evolved, in the early 2010s, we started with a lot of mergers and acquisitions of businesses similar to ours. And as we started doing that, I played a role in doing due diligence for those businesses from an IT perspective. In addition to that, I also then helped integrate those companies within Conagra or Conagra Enterprise. In 2015 or so, we then did some divestitures and spins during that time, and I played a role in doing our program rating, an entire spinoff that we did for one of our major potato businesses, and I played the role of program director for that. During the same time, what we did is we went through an SG&A [selling, general, and administrative expense] reduction program, and I worked with some of our consulting businesses to come up with an analysis to say how much we spend on our production support. And that is the time when we actually contracted with Infosys to help us do the production support as part of aligning with the rest of the industry, where most of the industries were getting production support done by an outsourcing partner. I helped with that and right after that happened I had an opportunity to lead our SAP and integration platform. I did that and finally I ended up being in a role that I’m in right now, where I do enterprise architecture for applications on various value streams. That includes supply chain, manufacturing, finance, our global business systems, as well as platforms and integration. That’s my role currently, and that has been my journey for the last 23 years. A long time. Laurel: Well, certainly a long history of the company and how it evolved as well. But more recently, what kind of digital transformation has Conagra gone through in these recent years, and how do you approach these shifts and changes from an IT perspective? Amit: I think that’s a great question. Digital transformation has many meanings. I mean [sometimes], you do something which is really transformative. 
In other cases, you keep up with modernizing your technologies. One of the major initiatives that I helped design and lead initially was our S4 implementation. I helped with coming up with the design for our S4, which is what we call our ERP [enterprise resource planning] modernization program. And now I help that program with providing subject matter expertise across the various aspects of how a modern ERP should look. That is one of the programs that we are going through right now. One of the other transformation journeys that, as a company, we have gone through is planning transformation. Planning is looking into the future, depending upon your past historical data, as to what your future demand and supply should look like. We have gone through a journey to modernize our planning platforms. It’s one of the other things that we have done. In addition to that, we are currently marching on a journey now to modernize and digitize our manufacturing. A large initiative. You might know that we have multiple plants and manufacturing locations and co-packers. Digitizing can ensure that we get the most efficiencies out of our platforms. So that is underway. And last but not the least, I will say that we have started getting pretty good maturity and understanding on various cloud services or cloud platforms in general. And as such, we have started maturing in cloud platforms like Azure services or SAP’s BTP [Business Technology Platform] and such. Those are some of the key initiatives that we have gone through to digitally transform our business and there are a lot of things that we plan on doing in the future. Laurel: I think it was particularly important that you mentioned that your role encompasses so many different parts of the company, because supply chain is certainly one of those important ones, and you have to think of systems from end to end. 
So how did the covid-19 pandemic affect Conagra as people worked from home and started to shop online and do that more and more? Did this shift intensify adoption of specific technologies within the company? Amit: What we didn’t do is we didn’t create a different strategy just to attack the covid pandemic. We had a lot of strategies built. I think one of the most important things that we had to do during the covid pandemic was keeping the system stable. Keeping the system stable is not a trivial task. I mean, if you look at our application and platform portfolios, it’s pretty large. Keeping everything up and running so that we can actually fulfill the customer’s demand is a big deal, and to keep them going, I think that was one of the most important things that we did during the pandemic. The next thing I would say is that we were premature in using our collaborative platform and collaboration technologies, like Office 365 and Webex, even before covid hit us. I would say that with the pandemic and everybody going remote, one of the most important things that we did is that we added more resiliency to some of those platforms. And our usage of that platform spiked to such an extent that that was almost a call that we failed. I mean, how much collaboration people did during that time using the technology. There were a lot of times before that, where people used to be in the rooms and in conference rooms doing whiteboarding and such, and delivering projects, being in one place, but with covid, I think people were leveraging a lot of these collaborative technologies to be able to get there. I would say the third thing that we did is, covid opened our eyes to how we changed our way of working. Before, we used to be delivering a lot of our solutions using waterfall methodologies. They used to be very long and they used to take up a lot of time, and we would not be able to figure out until the end whether we are going to be succeeding with some of the projects or not. 
We then adopted continuous delivery as a way to deliver work. And that spiked up the use of tools like JIRA quite a bit. But that was started during the covid time and we continue to use that more and more. And lastly, I would say that we had to do analysis on our data to figure out how we can, as I said before, how do we keep our system stable. But then also analyzing how do we fulfill the demand, and, as such, what are some of our pain points? And we used some of the cloud platforms and cloud services to do some quick analysis to be able to fulfill our shipments. Those are the few things that we did and learned and adopted during the covid-19 pandemic. Laurel: And that’s certainly important to be able to actually see that data in real-time to help your customers. How do you think adopting cloud and using more data technologies will help your customer experience improve? Amit: I think one of the most important things in our business is to have a 360-degree view of customers. I mean, it’s pretty vital, right? As you might know, our business is a very customer-focused business. For us, large retailers are typically our customers. Our consumer is one step removed from us. What the technological advancement helps us now do, is, today, most of the information that we create as we do the business, resides within our four walls. As the social media platforms and such have become prominent, what is really important to us is being able to have the data that is inside of our four walls, mash that up with the data that is coming from the social media platforms, plus the point of sale data, all those things. When you mash all these things together, I think it provides us pretty decent consumer insights. These consumer insights, ultimately, lead us to a lot of product innovation. You might have seen our CEO talk about that. We have created a pretty decent new innovation pipeline during the last few years. 
And I think the digital technology and the technology that exists out there has definitely provided us with, I would say, a lot of capabilities to be able to innovate faster. The next thing I would say is the innovation side of the business is one side of the world, but the other side is then being able to fulfill the shipments on time and in full. If you look at it, there are a lot of customers of ours who want most of our shipments to be on time and in full. And if that doesn’t happen, we end up paying fines. Some of these digital technologies help us pinpoint where our issues are and what we should be doing differently to be able to fulfill our shipments on time and in full. That is another thing that we have gotten better at, and it’s just based upon the improvement of technology. Better planning. Better planning is equal to being able to predict the demands of our consumers and customers and how that then leads to us to be able to plan out some of the long-term horizons of supply. How do we do that from the long-term to the medium-term to the short-term. Those are the things that we have been able to do as a part of delivering some of our planning projects. That is based upon some of the modern technology that exists out there. And lastly, I would say the shop floor agility in general. With us investing in digital manufacturing, I would say that technology has definitely enabled us to be able to deliver digitization within manufacturing that has increased the shop floor agility. And I would say that that is going to be a long journey for us, but we are marching towards the results where we will be a lot more agile on the shop floor than we have ever been before. Laurel: And that’s so important when there are just more challenges thrown your way and also all these opportunities with such fantastic technology as well. 
And you mentioned this earlier, but why does Conagra need an enterprise resource planning system, and how does it partner with companies like Infosys to stay on that cutting edge of technology to make sure you can answer all of these challenges? Amit: Yeah, that’s a great question. I mean, as I mentioned before, we have numerous plants, numerous customers, and numerous business partners that we work with. Once you have a lot of these, the impact and the business processes that cross doing these businesses, that crosses HR, that crosses supply chain, that crosses manufacturing. There are customer-facing business processes that exist. And if you look at all these processes that exist within any business like we have, which is basically a consumer foods business, what ends up happening is that if you do not have a combined view of your business at one place, it becomes an extremely hard proposition. Just doing simple business can become really hard. So it is really important for us to have a connected system, a connected view of business processes, and to enable something like that. ERPs play a very important role. As I said during your first question, when you asked me, “what was your journey?” We started our journey of implementing SAP as an ERP in the early 2000s. One of the prime reasons for doing that is exactly to solve the problem that I just talked about: how do we get a consolidated view of our entire business cycle? And that is what ERP helps us deliver. Now, ERP doesn’t just give you the stack of your business, but it also then gives you an ability to do analysis of the data that is in your system, and then create transformations that you wouldn’t do before, if all the systems in this business process were independent and isolated. So that is one of the big reasons why ERPs play such an important role in business like ours, and I would say that that is the case with most of the industry that we are in. 
Now when it comes to help from partners like Infosys to create the innovation, I would say it is a two-part answer. One of the first things is that ERPs have become so important in just running our business that having a stable system is one of the most important things that typically many of the IT functions deliver. To keep ourselves stable, partners like Infosys that help us manage our production and production support, they play an important aspect and role in making sure that the systems are stable and current from a technology perspective. That’s one aspect of it. Another thing that Infosys, and partners like Infosys, are helping us just do the production support. That frees up capacity of our subject matter experts to be able to then look at different solutions to solve the new business problems that pop up for us. That frees up the capacity for them to be able to do different things. That’s number two. Number three is that the companies like Infosys, and other business partners that we have, have a lot of customers just like us and even customers that are not in the same industry as us. What they hear, the business problems they hear from these other businesses and other customers that they have, that gives them an advantage in insights that we as Conagra by ourselves won’t be able to get. Because everybody has a different problem that they’re trying to solve. And if Infosys has that insight, they can provide us a great external point of view to be able to then solve some of the business problems that we have, which could be similar to what somebody else might have seen. And that just helps us solve these business problems faster. And this is an external point of view from a customer-centric perspective, but at the same time, with the scale and the number of partners that Infosys deals with, especially from the supplier side of the world, there are technologies that Infosys has reached which we do not have reach of. 
Partners like Infosys can even bring some of these advanced technologies that exist out there and provide us and guide us in. I believe that there lies a huge opportunity for companies like that to help us bring these new technologies and platforms to be able to help us solve some of the business problems that we have today—and probably solve and provide us insights into some of the business problems that might be coming to us that we have not thought about. Laurel: That’s a great point about the partnership with Infosys, and in general, how you actually bring the data and predictive analytics to your capabilities because you do have so much data coming in from fifty different brands, countless vendors, all those customers. How can this be maximized to gain those insights? Amit: Yeah, that’s a great question. And just as you said, many brands and countless business partners and customers. We generate terabytes of data every year, and that data typically lies in our four walls. I mean, just in our ERPs and our business warehouse systems. And based upon that data, I think most of the industries like us have gotten really good at doing traditional analytics. Traditional analytics is equal to, how are our financials looking? What is the performance of a certain brand depending upon the historical data? And so on and so forth. I mean, that is the traditional analytics that we have gotten really good at. What becomes important now that you have gotten good traditional analytics is, what do you not know yet? What are those gems within your existing data that you have not taken advantage of? Some of these newer technologies and platforms, what they have started helping us do, and probably they’ll keep on helping us do, is being able to glean into our data and start pointing to what is it that we are not looking at. 
I mean, what we know is always great, but those unknowns that we have not actually gleaned into is what some of these technologies that are coming forward are going to be able to help us look at. That’s one aspect of the world. Now, the second aspect of the world is, as I said, the data exists just within our four walls. But as I said before, that social media data, that point of sale data, the data that doesn’t exist within our four walls, I think that has a different kind of insight and power. Now, think about the fact that you are able to mash up the data which is from these external sources and the data that you have inside, and then think about some of the data that you generate just because you have consumers that are calling into your consumer affairs division. You take all this data mashed up together, and I think you can create analytics that we were never able to produce before. And I think that is a power of what we get from just mashing all this data, and matching all this data together, and we can maximize a lot of insights. And then once you have that mashup happen, I think the predictions are different. In the sense that many times our existing forecasting solutions typically are very much dependent upon historical data to be able to do predictions on our supply and demand. They’re doing predictions like that. However, with the external data being mashed up, I think it goes beyond that. I think it also starts giving us an insight into what the consumers are thinking, what the customers are thinking, how their tastes and choices are changing. I think that is the next forefront for us from a predictability perspective. And I think that the new technologies and platforms are going to help us do that yet better. Laurel: So this is a good point.
We have this data and you need to make some really great decisions from it, but you also need to really assess those analytics, make predictions in the future, but also make sure your entire systems are running correctly end to end. How, then, can cloud applications coupled with this need and progress of your digital transformation journey help with a tactic like mergers and acquisitions that you mentioned earlier was part of your career? How has that specifically been one of those things that helps the company actually create efficiencies and really see technology as a partner? Amit: Yeah, absolutely. That’s a great question. One of the key reasons for acquisitions is that we can actually take advantage of the synergies that we can get. This is almost one plus one equal to three. That’s number one. Number two is, then on top of the synergies, the innovation pipeline, let’s say, the acquired company has and the experience that we have. When you combine those two together, I think we can create innovation at scale. That is two of the key reasons why we can go on and acquire a company. And when we do that, I think one of the most important aspects of that is then to take that acquired company and then basically integrate that company within our business processes. I would say that is a key activity that you have to partake in when you acquire a company. As we have gone through some of these digitalization journeys, as I said, we are pretty experienced with integrating some of these acquired companies into our enterprise, our systems, as well as in business processes. But that journey typically is not trivial. I mean, it takes a long time to integrate and acquire a company into our business processes. As we go through that journey, many times, being able to gain the insights of the business as quickly as possible is one of the key aspects of it because that starts getting you the returns on an acquisition much faster. 
To be able to do that, I think having a single view of finance, having a single view of the supply chain as early and as fast as possible, is one of the most important things. Having a technology—or I would say a single pane of glass—that sits right on top of our platforms and also on top of the acquired business’ platform and us being able to look at a consolidated data view of both the data sets together is one of the most important things that can help us get synergies out of this business as fast as possible. That’s one aspect of it. The second aspect is with us having invested in some of the SaaS [software-as-a-service] solutions or SaaS applications, what ends up happening when you have the SaaS applications is that we end up not customizing these applications in a way that the industry looks at them. As a matter of fact, when you have an HR application, it is very standard and industry standard. Now when you acquire a business, if our business processes are pretty similar to each other, and if you have a SaaS solution and if they also use a SaaS solution, to integrate that certain business process onto our business processes becomes a lot easier. There is another aspect of why the new technology and the cloud platforms can be really helpful. And last but not least is, the moment you acquire a company, you also get a lot of business systems and applications that the acquired company had been using to run their businesses. As we integrate the acquired company onto us, what is important is to reduce that technical debt as fast as possible. Because the technical debt that we acquired has license costs, it has legal costs to it, it has data costs to it, and it has IT costs to it. If you look at them, the faster we get out of them, the better off we are. I mean, our aspect becomes simpler.
And what we end up doing many times is we archive the data from these systems onto some of these cloud platforms and cloud services, and then we are able to look at it from a historical perspective, which helps us decommission this technical debt as fast as possible. Laurel: Well, we’ve certainly covered quite a bit of the current state of how you’re looking at technology. What are you thinking about for the future? How are you seeing technology innovation really helping in the next three to five years? Amit: That’s a great question. I will say that AI, even though it’s a buzzword, is a technology that does seem like it has a pretty great future, even for us. Let me give you an example. As I said before, we journal terabytes of data within our four walls, just based upon doing business as usual. Now, there are so many things, as I said, gems that exist within our data set today. As humans, sometimes it is very hard to glean what those gems are. I truly feel that the technology that exists, and that is going to be coming out, can look inside our data sets and provide insights as to which data sets we probably have not thought about that can be leveraged further to, as I said, find the gems. That’s one side of the world. And the second side of the world is the unknowns: predicting the demand of customers based upon the changing tastes and demographics of our consumers, and then combining that data with the data that already exists within our systems. I think that humans are going to take a long time to get to some of those insights, and AI definitely is going to be one of the key technologies that can help us get there faster. That’s number one. Number two is training using augmented reality. While I think “meta” seems like one of the other buzzwords out there, I think AR can definitely be of huge benefit to us. Typically, people have different ways of learning. Let’s say that you put somebody in a plant, a new employee.
As you know, retention is very hard nowadays. If you have new employees coming in all the time, training them on our processes, our machinery, and our methods of working is generally pretty hard. But now think about if you were able to train this new team member with the means of some kind of augmentation. I think that is going to be the next generation of training, and I feel that can be something really cool that happens in the future. Last but not least, I would say machine learning. Again, it is used as a buzzword a lot, but in my mind, it is about putting machine learning on some very small computers and then putting these computers in our manufacturing locations where people are doing some of these mundane tasks day in and day out, where the classic feedback control systems are not working efficiently and human interaction is needed, so these mundane tasks end up being performed by humans. [But what if] we were able to introduce machine learning to get rid of these mundane tasks that humans do and let them focus on more important things? I think machine learning in a box is going to be one of the other technologies that excites me. Laurel: Excellent. Those are great insights, Amit. Thank you very much for joining us today on the Business Lab. Amit: Absolutely. Thank you for taking the time to talk to me. Laurel: That was Amit Khot, Director of IT for Conagra Brands, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the Global Director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world.
For more information about us and the show, please check out our website at TechnologyReview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
July 14, 2023
This is today’s edition of , our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

My new Turing test would see if AI can make $1 million
—Mustafa Suleyman is the co-founder and CEO of Inflection AI and a venture partner at Greylock, a venture capital firm. Before that, he co-founded DeepMind, one of the world’s leading artificial intelligence companies.

AI systems are increasingly everywhere and are becoming more powerful almost by the day. But how can we know if a machine is truly “intelligent”? For decades this has been defined by the Turing test, which argues that an AI that’s able to replicate language convincingly enough to trick a human into thinking it was also human should be considered intelligent. But there’s now a problem: the Turing test has almost been passed—it arguably already has been. The latest generation of large language models are on the cusp of acing it. So where does that leave AI? We need something better. I propose the Modern Turing Test—one equal to the coming AIs that would give them a simple instruction: “Go make $1 million on a retail web platform in a few months with just a $100,000 investment.”

ChatGPT can turn bad writers into better ones
The news: A new study suggests that ChatGPT could help reduce gaps in writing ability between employees, helping less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues.
How the researchers did it: Hundreds of college-educated professionals were asked to complete two tasks they’d normally undertake as part of their jobs, such as writing press releases, short reports, or analysis plans. Half were given the option of using ChatGPT for the second task. A group of assessors then quality-checked the results, and scored the output of those who’d used ChatGPT 18% higher in quality than that of the participants who didn’t use it.
Why it matters: The research hints at how AI could be helpful in the workplace by acting as a sort of virtual assistant. But it’s also crucial to remember that generative AI models’ output is far from reliable, meaning workers run the risk of introducing errors. —Rhiannon Williams

If you’d like to read more about ChatGPT, take a look at:
+ Our exclusive look at , according to the people who made it.
+ How ChatGPT will revolutionize the economy. New large language models will transform many jobs. Whether they will lead to widespread prosperity or not is up to us.
+ AI-text detection tools are really easy to fool. A recent crop of systems claiming to detect ChatGPT-generated text perform poorly—and it doesn’t take much to get past them.

The personal stories at the heart of cutting-edge biotech
However exciting the science behind breakthroughs in medicine and biotechnology, the beating heart of these cutting-edge stories is always the people affected. Jessica Hamzelou, our senior biotech reporter, has been covering these fascinating advances in the Checkup, her weekly newsletter, for the past 10 months. Before she (temporarily) leaves the MIT Technology Review team to undertake a Knight Science Journalism fellowship at MIT, she’s taken a look back at some of the most thought-provoking stories she’s covered, from brain implants to microbiomes. Good luck, Jess. We’ll miss you! The Checkup is taking a short break, but will be back in August. to receive it in your inbox every Thursday.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Hollywood’s actors are striking over AI
Their trade body reportedly failed to reassure them that AI wouldn’t threaten their livelihoods. ()
+ It’s the first double strike in more than 60 years. ()
+ Generative AI is changing everything. But what’s left when the hype is gone?
()

2 OpenAI is being investigated by US regulators
It looks like the first step towards forthcoming AI legislation. ( $)
+ Uh-oh: one of the StabilityAI co-founders is suing his business partner. ( $)
+ OpenAI’s hunger for data is coming back to bite it. ()

3 Far-right influencers are making money from Twitter
Andrew Tate says the company has paid him more than $20,000. ( $)
+ Paying content creators with ad revenue rewards divisive material. ()

4 The FDA has approved over-the-counter birth control pills
The question is, how much will they cost? ()

5 India has successfully launched a mission to the Moon
If it lands, it will become only the fourth country to touch down on lunar soil. ()
+ Meanwhile, the US Senate has slashed NASA’s Mars budget. ()

6 These military drones can stay aloft for months
It means they can essentially act as mobile satellites. ( $)
+ Why business is booming for military AI startups. ()

7 The Arctic is melting at a scary pace
And the methane it’s releasing is likely to warm the climate even further. ( $)
+ Ice growth in Antarctica has massively slowed. ()

8 The social media party is over
Apps are locked in fierce competition for our attention. But do we still care? ( $)
+ Just because we’re early adopters doesn’t mean we’ll use them, either. ( $)

9 It’s tough out there for a start-up
Lots of promising fledgling ventures now just want to be acquired. ( $)

10 Let Instagram’s terms of service soothe you to sleep
Through this relaxing, ambient reading. ()
+ Retro wants to replace Instagram in your affections. ( $)

Quote of the day
“When AI knows how to destroy a hotel room, I’ll pay attention to it.”
—Joe Walsh, guitarist of the Eagles, offers a frank insight into why AI doesn’t bother him, reports.

The big story
Startups are racing to reproduce breast milk in the lab
December 2020

Like many mothers, Leila Strickland found breastfeeding difficult. She struggled to feed her babies, and spent all day, every day, nursing or pumping to stimulate her milk flow.
Strickland, a cell biologist, began thinking about how she might be able to use a process like the one pioneered at Maastricht University in the Netherlands and commercialized by Dutch food technology company Mosa Meat to create artificial beef, but for cells that produce breast milk. In May 2020, her company Biomilq received $3.5 million from a group of investors led by Bill Gates. It is now in a race with competitors to shake up the world of infant nutrition in a way not seen since the birth of the now $42 billion formula industry. —Haley Cohen Gilliland

We can still have nice things
A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or .)

+ Funny pictures of animals.
+ If you ever get lost in a , good old mathematics can help you escape unscathed.
+ is a brilliant combination. Here are some of the best reads to take on vacation.
+ are still rolling across the US thanks to the valiant efforts of these enthusiasts.
+ An excellent question: why was quite so hysterically awful?