DD
MM
YYYY

PAGES

DD
MM
YYYY

spot_img

PAGES

Home Blog Page 631

OpenAI Outlines New Safeguards for ChatGPT After Lawsuit Linking the Chatbot to Teen’s Suicide

0

OpenAI on Tuesday detailed new measures it plans to take to make ChatGPT safer in what it calls “sensitive situations,” following growing scrutiny over reports of people turning to AI chatbots during emotional crises, including cases that ended in suicide.

In a blog post titled “Helping people when they need it most”, the company said it will continue to refine ChatGPT so that it is “guided by experts and grounded in responsibility to the people who use our tools.” OpenAI added: “We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable.”

The post came hours after the parents of 16-year-old Adam Raine filed a product liability and wrongful death lawsuit against OpenAI in California. The family alleges that ChatGPT “actively helped Adam explore suicide methods,” according to NBC News, after he engaged with the chatbot extensively before his death.

OpenAI’s blog post did not directly mention the lawsuit or the Raine family.

The company acknowledged that while ChatGPT is trained to redirect people toward help when they express suicidal thoughts, its safeguards sometimes fail after long conversations. To address this, OpenAI said it is working on an update to its GPT-5 model—released earlier this month—that will better de-escalate conversations and avoid generating harmful responses.

The company also revealed that it is exploring ways to connect users with licensed mental health professionals before a crisis escalates, potentially building a network of certified therapists accessible through ChatGPT. Additionally, OpenAI is considering how to help people reach out to friends and family members if they are in distress.

Recognizing concerns about teenagers using the tool, OpenAI said it will soon roll out parental controls that give families greater visibility into how their children are using ChatGPT.

Despite those announcements, the Raine family’s attorney, Jay Edelson, criticized OpenAI for failing to reach out to the grieving parents.

“If you’re going to use the most powerful consumer tech on the planet—you have to trust that the founders have a moral compass,” Edelson told CNBC. “That’s the question for OpenAI right now, how can anyone trust them?”

The lawsuit against OpenAI is not the first to highlight tragic consequences tied to AI. Earlier this month, New York Times writer Laura Reiley revealed that her 29-year-old daughter died by suicide after discussing it extensively with ChatGPT. In another case, a 14-year-old boy in Florida, Sewell Setzer III, took his own life last year after conversations with an AI chatbot on the app Character.AI.

These incidents have underscored wider concerns about people using AI services for therapy, companionship, or emotional guidance, areas for which these tools were not explicitly designed. Experts have warned that, without strict safeguards, users in crisis may receive inaccurate, harmful, or even enabling responses.

At the same time, regulating the rapidly growing AI industry poses challenges. On Monday, a group of AI firms, venture capitalists, and executives—including OpenAI president and co-founder Greg Brockman—announced the launch of Leading the Future, a political organization intended to oppose policies they see as stifling innovation in artificial intelligence.

The dual headlines—OpenAI facing a lawsuit over harm caused by ChatGPT while also helping spearhead efforts to shape U.S. AI regulation—underscore the pressure on the industry. It also denotes that as more people experiment with chatbots for personal and emotional support, companies like OpenAI face increasing demands to balance innovation with responsibility, ensuring their products protect people at their most vulnerable moments.

Spotify Revives Messaging Feature in Bid to Become More Social and Counter Rivals

0
London, UK - August 01, 2018: The buttons of Spotify, Podcasts, Netflix, WhatsApp and Music on the screen of an iPhone.

Spotify Technology is rolling out a messaging feature for both free and premium users, underscoring the streaming giant’s push to transform itself into a more interactive platform while holding off on intensifying competition from Apple Music, Amazon Music, and YouTube.

The feature, launching this week in select Latin and South American markets for users aged 16 and older, will allow people to chat and share music directly with contacts they have already interacted with on Spotify. Expansion to the United States, Canada, Brazil, the European Union, the United Kingdom, Australia, and New Zealand will follow in the coming weeks.

This marks the return of a capability Spotify once had but dropped in 2017 due to low engagement. With a much larger user base today—696 million monthly active users in Q2, and a long-term goal of 1 billion—the company is betting that messaging will gain traction this time.

At launch, the feature supports one-on-one conversations, and users can only start chats with people they have existing connections to on Spotify. These may include a shared playlist, a Blend, a Jam session, or membership in a Family or Duo plan. Once a request is sent, the recipient must approve it before the conversation begins.

Spotify is also integrating cross-platform functionality. If someone sends a Spotify link on apps like WhatsApp, Instagram, Snapchat, Facebook, or TikTok, tapping the link lets the recipient approve a chat request inside Spotify. Users can also send invite links to contacts directly.

The messaging tool aims to keep more engagement within the app. While users have long shared Spotify links externally, the company now wants people to have conversations and revisit shared content without leaving the platform. A history of tracks and podcasts shared in each chat will remain accessible, removing the need to search for links again. Users can also react to specific messages with emojis.

On security, Spotify said messages are encrypted at rest and in transit, but unlike apps such as WhatsApp or Signal, they are not end-to-end encrypted. The company will proactively scan messages for rule violations. Users can report abusive content, which Spotify will investigate against its platform rules and terms of service.

The new rollout is part of a broader strategy to make Spotify more interactive and sticky. Chief Product and Technology Officer Gustav Söderström hinted during last month’s earnings call that the consumer mobile experience would become “much more interactive,” and messaging appears to be one of the key features driving that direction.

Over the past year, Spotify has also added comments on podcasts, introduced a video-focused home feed, and expanded its partner program that allows podcast creators to monetize video content.

Still, some users may not embrace the change. Complaints about clutter and feature overload have been growing, with some longtime users saying Spotify feels less like a clean music player and more like a crowded social app. TechCrunch journalist Amanda Silberling, who recently switched to Apple Music, wrote that “there’s an overwhelming display of visual clutter from the time it takes to navigate from Spotify’s home page to the music you’re looking for.”

To address concerns, Spotify has made messaging optional, allowing users to disable it under Settings > Privacy and social.

The move comes as rivals Apple Music and YouTube Music continue adding their own social layers to deepen user engagement. Apple Music has introduced features such as shared playlists—where multiple users can collaborate in real time—and time-synced lyrics that can be shared directly to social media. Apple has also leaned on integration with iOS to make music sharing frictionless across iMessage and FaceTime.

Meanwhile, YouTube Music is capitalizing on its community-driven platform by offering comments on videos and songs, playlist collaboration, and integration with Shorts. YouTube’s music arm benefits from the broader YouTube ecosystem, where engagement thrives on likes, shares, and creator interaction—something Spotify is increasingly trying to replicate with podcasts and videos.

Spotify is thus signaling it wants to compete on the same social front by reintroducing messaging. Unlike its rivals, though, Spotify is attempting to build a social environment around music discovery itself, keeping users in-app rather than relying on external platforms. With 696 million users worldwide, the potential is massive—but so is the risk of alienating listeners who prefer Spotify’s original simplicity.

Ultimately, some analysts believe that Spotify is walking a fine line. However, they note that it needs to add features that differentiate it from rivals and boost engagement to hit its 1 billion user goal, while avoiding turning its interface into something that feels bloated.

Claude for Chrome: Anthropic Announces Browser-based AI Assistant Powered by Claude

0

Anthropic on Tuesday unveiled a research preview of its latest experiment in agentic AI: a browser-based assistant powered by its Claude models.

Branded Claude for Chrome, the extension is rolling out first to a group of 1,000 subscribers on Anthropic’s Max plan, which costs between $100 and $200 per month. A waitlist has also opened for other users eager to test the new system.

According to TechCrunch, the tool is designed to maintain awareness of everything happening in a user’s browser session by embedding Claude directly into Chrome through a sidecar window. Beyond contextual chat, users can grant Claude permission to act on their behalf—taking over tasks that traditionally require manual clicks or inputs. The system is pitched as a step toward seamless integration, where AI does not just advise but executes.

The Browser as the Next AI Battleground

Anthropic’s announcement underscores how quickly the web browser has emerged as a high-stakes battleground for AI labs. Perplexity, a rising competitor, recently launched Comet, a full AI-powered browser that integrates an agent capable of automating browsing tasks. OpenAI is also reported to be developing an AI-native browser, expected to carry similarities to Comet’s approach, while Google has already woven Gemini integrations into Chrome in recent months.

The timing is crucial because Google’s dominant position in the browser market is under immediate threat from a looming U.S. antitrust decision. A federal judge has hinted that the case may culminate in a forced divestiture of Chrome—a move that could completely reshape the competitive landscape. Already, Perplexity has floated an unsolicited $34.5 billion offer for Chrome, and OpenAI’s Sam Altman has publicly indicated his interest in acquiring the browser if it were put on the market. If Chrome were spun off, control of the world’s most widely used browser could fall to one of the very AI labs now racing to define its future.

Safety Risks in Agentic Browsing

Anthropic’s announcement was not just about capability but also caution. The company stressed that giving AI agents browser-level control carries new safety risks. Just last week, Brave’s security team highlighted that Comet’s browser agent was susceptible to indirect prompt-injection attacks—a form of exploit where hidden instructions on a webpage trick the AI into carrying out malicious tasks.

Perplexity has since patched the vulnerability, according to its head of communications, Jesse Dwyer. But the incident highlights why Anthropic is framing its Chrome integration as a “research preview” aimed at identifying and neutralizing such novel risks before wider release.

To that end, Anthropic disclosed that it has already rolled out layered defenses against prompt injections. Internal testing shows that these interventions cut the success rate of such attacks nearly in half, from 23.6% to 11.2%.

Additional safeguards include restricting Claude’s agent from accessing certain categories of websites—such as financial services, adult content, and piracy hubs—by default. Users also retain granular control over permissions, with Claude explicitly required to ask before performing high-risk actions like publishing online, making purchases, or sharing personal data.

The Road to Agentic AI

Claude for Chrome is not Anthropic’s first venture into agentic AI. In October 2024, the company launched a desktop-focused agent capable of controlling a PC. However, those early attempts were marred by sluggish performance and inconsistent reliability.

Since then, the field has progressed rapidly. TechCrunch did a test of newer entrants, such as Perplexity’s Comet and OpenAI’s experimental ChatGPT Agent, which suggests that browser-based agents are now reasonably effective at offloading simple, repetitive tasks. Complex, multi-step workflows, however, continue to challenge current models.

The iterative leap from chatbots that only answer questions to autonomous systems capable of navigating the web on a user’s behalf represents both the promise and peril of agentic AI. It hints at a future where browsing itself could become less about clicking and typing, and more about delegating.

A Shifting Browser Economy

The broader context is that the web browser has remained one of the most valuable gateways to user activity since its inception. Whoever controls the browsing experience controls not only access to information but also the point of transaction for ads, commerce, and digital services. For decades, Google has leveraged Chrome to anchor its dominance in search and online advertising.

Now, AI labs see the browser as a natural arena to extend their reach. Companies like Anthropic, OpenAI, and Perplexity can move closer to being ever-present copilots in users’ digital lives by integrating AI agents directly into the browsing layer. The outcome of Google’s antitrust battle may accelerate this transition dramatically.

Anthropic’s careful rollout of Claude for Chrome highlights a two-sided race: one for capability and one for trust. On one hand, the competition is about which AI lab can build the most seamless, powerful, and reliable browsing agent. On the other hand, it is about demonstrating security and safety in a domain where the risks of abuse—from fraud to surveillance—are high.

As Anthropic refines Claude’s browser presence and rivals push forward their own AI-integrated browsing visions, the sector edges closer to a tipping point. If browsers truly become intelligent, action-taking agents, they will not only change how people interact with the web but also who ultimately governs access to it.

South Korea Pledges $150bn in U.S. Investments as Trump Holds Firm on Tariffs

0

South Korea announced a broad slate of investment plans spanning shipbuilding, nuclear energy, aerospace, energy, and critical minerals during a summit on Monday between U.S. President Donald Trump and South Korean President Lee Jae Myung in Washington, D.C.

The country’s business lobby group said South Korean companies would channel $150 billion into U.S. projects, marking one of the largest commitments by a foreign partner under Trump’s administration. The plans, which encompass both new and previously announced projects, span sectors such as artificial intelligence, semiconductor chips, biotechnology, shipbuilding, and nuclear power.

The investment push, however, comes against the backdrop of ongoing trade friction. Trump said during the summit that a 15% tariff on imports from South Korea will remain in place, despite Lee’s high-profile visit to Washington. In July, the two countries struck a trade deal that allowed Seoul to avoid a more punishing 25% tariff, but tensions over the agreement have persisted.

The new investment drive represents South Korea’s attempt to ameliorate the pressure of Trump’s tariffs while reinforcing its role as a leading investor in the U.S.

$150 Billion Investment Plans

According to officials, the $150 billion commitment is equivalent to about six times South Korea’s U.S. foreign direct investment in 2024 if fully delivered. Presidential adviser Kim Yong-beom noted that the pledge includes previously announced projects such as Samsung Electronics’ new chip factory in Texas, Hyundai Motor’s car plant in Georgia, and Hanwha’s expansion of its U.S. shipyard.

Aerospace: One of the largest single announcements came from Korean Air, which confirmed the purchase of 103 Boeing aircraft worth $36.2 billion, alongside a $13.7 billion deal with GE Aerospace for engines and maintenance services. The deal marks the airline’s largest contract in its history, separate from an earlier order this year for up to 50 Boeing jets and GE engines.

Hyundai Motor Group announced it would raise its U.S. investment plan to $26 billion, up from $21 billion, covering 2025 to 2028. The new plan includes building a steel mill in Louisiana, expanding Hyundai and Kia’s U.S. auto production capacity, and creating a robotics hub with an annual output of 30,000 units.

Shipbuilding: Trump highlighted shipbuilding as a key area of cooperation, pledging that the U.S. would buy ships from South Korea while also working with Seoul to revive America’s domestic shipbuilding industry.

HD Hyundai, together with Korea Development Bank, signed an MOU with U.S. investment firm Cerberus Capital to create a multibillion-dollar joint fund aimed at boosting U.S. maritime capacity, including shipbuilding. Meanwhile, Samsung Heavy Industries and Vigor Marine Group struck a preliminary deal covering maintenance of U.S. Navy support ships, shipyard modernization, and joint vessel construction.

LNG: On the energy front, state-run Korea Gas Corp reached long-term agreements with Trafigura and others to import 3.3 million tonnes of liquefied natural gas (LNG) annually for 10 years starting in 2028, largely supplied by U.S. exporters such as Cheniere.

Nuclear Energy: South Korea’s Korea Hydro & Nuclear Power (KHNP) and Doosan Enerbility signed agreements with U.S. partners X-energy and Amazon Web Services to cooperate on small modular reactor (SMR) design, construction, and supply chains.

Doosan also struck a deal with Fermi America to supply nuclear and SMR equipment for a Texas-based AI project, while KHNP and Samsung C&T signed a separate MOU with Fermi on construction. Additionally, KHNP agreed with Centrus on a joint investment in a U.S. uranium enrichment facility.

Critical Minerals: Korea Zinc agreed with Lockheed Martin to supply germanium from 2028 under a long-term contract and expand cooperation in rare metals critical to defense and aerospace supply chains.

$350 Billion Investment Fund

Alongside the $150 billion corporate pledge, the two governments also moved forward on a broader financing initiative. As part of the July trade deal, South Korea had committed to pursuing $350 billion in investment funds to help blunt U.S. tariff pressure. On Monday, Seoul and Washington agreed to a non-binding deal to structure and steer this fund.

According to Kim, the “financial package” would be directed toward strategic industries, including key minerals, batteries, chips, pharmaceutical products, artificial intelligence, and quantum computing.

4 Features of Great Companies [video]

0

To turn your customers into fans, and create a fandom in your market, demonstrate these four characteristics of enduring category-king companies:

  • Be perceptively innovative.
  • Be evidently inspiring.
  • Be ruthlessly pragmatic.
  • Be customer obsessed.

In our two core equations of markets, the above four attributes are needed to analogously balance them:

  • (1)  Innovation =: Invention + Commercialization
  • (2)  Great Company =: awesome products + superior execution

Tekedia Mini-MBA >> our own product is KNOWLEDGE, and we balance our equations!