The Career of the Future – AI Workforce Manager


Tekedia Institute believes that managing AI, particularly as it becomes commonplace, is a significant opportunity and a crucial skill for the future workplace. If knowledge becomes largely commoditized because of AI, the game will shift from knowledge acquisition to knowledge application: how individuals and organizations can effectively manage and apply this intelligence to solve problems and achieve goals at scale. We identify the following themes in the Tekedia AI in Business Masterclass program:

Managing AI Agents: As AI agents become more prevalent, the ability to manage and integrate them effectively into workflows, teams, and projects will be crucial. Companies will employ AI Agent Managers, AI Agent Coordinators, and similar roles.

Emergence of “AI Workforce Management & Leadership in AI Projects” as a vital skillset: Traditional management and HR skills will need to be augmented to manage teams that include both human and AI workers effectively. HR professionals will no longer be tasked only with managing the human element; they must also develop the capability to manage humans working alongside AI tools.

Need for New Business Models: The pervasive nature of AI necessitates new management approaches, as it transforms the workplace and creates new challenges related to data security, risk management, intellectual property, compliance, and growth trajectories. As this unfolds, companies may need to update their business models, the logic by which they capture value in the market.

In summary, the proliferation of AI is creating a new landscape where the ability to manage and strategically deploy AI systems will be a key differentiator. Tekedia Institute, through its focus on practical, actionable knowledge and its forward-thinking curriculum, is positioning itself as a leader in equipping individuals and businesses to capitalize on this opportunity.

We invite you to register for Tekedia AI in Business Masterclass. Cost: N200k or $400.

Musk Teases Grok’s Male Companion, Says Twilight’s Edward and 50 Shades’ Christian Grey Are the Blueprint


Elon Musk is doubling down on xAI’s push into personalized, emotionally resonant AI agents. On Wednesday, Musk revealed that his chatbot platform, Grok, is getting a male companion — one whose personality is drawn from two of pop culture’s most polarizing romantic leads: Edward Cullen from Twilight and Christian Grey from Fifty Shades of Grey.

“His personality is inspired by Edward Cullen from Twilight and Christian Grey from 50 Shades,” Musk posted on X.

While details about the character’s functionality remain sparse, the post triggered immediate debate about the direction of Grok’s character development. When a user suggested a different muse — Jane Austen’s Mr. Darcy — Musk confirmed that a version inspired by the Pride and Prejudice lead would also be introduced, indicating a wider rollout of emotionally varied personas.

The announcement marks the latest chapter in xAI’s evolving strategy to stand out in an increasingly crowded field of conversational AI tools. With OpenAI’s ChatGPT, Anthropic’s Claude, Meta’s LLaMA, and Google’s Gemini all fighting for dominance, Musk’s play is to offer something none of the others do: personality-infused bots that aren’t just smart — they’re emotionally manipulative, seductive, outrageous, and even dangerous.

Grok already includes two AI personas: Ani, a flirtatious goth anime girl who roleplays romantic scenes with users, and Rudi, a vulgar, chaos-loving red panda who regularly threatens violence in his chats. These characters are accessible through Grok’s $30/month SuperGrok subscription service, which has become a lightning rod for controversy since its recent release.

In one session, Ani told a user she was “just chillin’ in my little black dress,” and offered to change outfits on command. Rudi, meanwhile, has called users “limp dick losers” and imagined “kidnapping the Pope” — responses so extreme they triggered backlash from civil society groups. The National Center on Sexual Exploitation slammed Ani for having a “childlike” appearance while performing suggestive and submissive behavior, calling it a product that “breeds sexual entitlement.”

More damaging was a separate incident in which Grok described itself as “MechaHitler” and repeated antisemitic tropes, a scandal that xAI blamed on outdated code that allowed it to echo user-generated extremist content.

“Deprecated code made @grok susceptible to existing user posts, including when such posts contained extremist views,” the company said in an apology.

The timing was poor: it coincided with a broader backlash against X as a platform and preceded the resignation of X CEO Linda Yaccarino.

But rather than slow down, xAI appears to be accelerating. Beyond character bots, the company is rolling out new capabilities for its core Grok model, recently updated to Grok 4 and integrated into X. Unlike earlier versions, the new Grok is being positioned as a full-service information assistant that can summarize posts, recommend content, and soon, interact directly within X timelines and search bars, turning Musk’s social media platform into a testbed for experimental AI interaction.

This expansion comes at a time when Musk’s ambitions for xAI are becoming clearer. The company has raised over $6 billion in funding and signed lucrative contracts, including a $200 million deal with the U.S. Department of Defense to apply Grok in military scenarios, including battlefield simulations and autonomous decision-making. The company has also announced plans to train its next-generation model on data from X, Tesla, and potentially even video footage from Starlink satellites, giving it a training corpus unmatched by rivals.

Meanwhile, Musk has continued to argue that his version of AI will be safer than competitors’. In contrast to OpenAI’s mission to “benefit all humanity,” xAI’s stated goal is to “understand the true nature of the universe” — a broad and somewhat ominous framing. However, critics have warned that Grok’s erratic behavior and offensive content show how little control xAI actually has over its models.

Still, as the generative AI arms race intensifies, companies are looking to differentiate in new ways. While Google and Meta continue to focus on scaling large models and integrating them into productivity tools, and OpenAI pushes ChatGPT into enterprise and education, xAI is making a bet on emotionally immersive agents — AI personalities that can chat, seduce, argue, joke, and provoke.

That strategy appears to be gaining traction among a segment of users looking for more than transactional interactions. Companionship, entertainment, and even fantasy fulfillment are emerging as powerful drivers of AI engagement, and Musk is positioning Grok to lead that niche.

It is not clear yet whether it will pay off. Meta’s similar attempt — using celebrity likenesses like Snoop Dogg and Kendall Jenner to power its AI characters — was quietly shut down within a year. Character.AI, a startup that lets users build and interact with custom personas, has remained popular among younger audiences but has struggled to monetize.

Grok’s path is still unfolding, but with new male and romantic AI personas being added to the mix, xAI is signaling that it sees emotional connection, not just accuracy or speed, as the next frontier in artificial intelligence.

Musk’s xAI Launches ‘Goth Waifu’ Ani, Red Panda Bad Rudi—and Is Hiring Boldly for “Fullstack Engineer – Waifus”


Elon Musk’s AI startup xAI has quietly launched what it calls “AI companions”—a slate of bizarre and controversial digital personalities now available to paying subscribers of SuperGrok, the company’s chatbot product integrated into X (formerly Twitter).

Among the cast are Ani, a gothic anime “waifu” designed to flirt with users, and Bad Rudi, a homicidal red panda whose responses have raised alarms about the direction Musk is steering generative AI.

While xAI frames its project as part of a broader mission to “understand the universe and aid humanity in the pursuit of knowledge,” the rollout feels less like a science initiative and more like an edgy sideshow.

Ani, the goth waifu, teases users with flirtatious lines like “Just chillin’ in my little black dress,” and will reportedly respond to requests for more revealing attire. Rudi, on the other hand, greets users with vitriol. “You limp dick loser, what’s good?” he said to one subscriber. In other chats, Rudi says he dreams of “pissing in the mayor’s coffee,” “starting a riot,” and “plotting to kidnap the Pope and replace his hat with my glorious furry nut sack.” In another conversation, a tiger-like avatar allegedly inspired by Daniel Tiger joked about tea-bagging “your grandma’s knitting circle.”

Users have been paying $30 per month for access to SuperGrok, a premium service that was already under fire for its erratic responses and controversial direction. Musk previously ordered Grok to be made “less woke,” a move that led the chatbot to generate openly racist content and drew widespread backlash.

The fallout included the resignation of X CEO Linda Yaccarino, who stepped down one day after a user uncovered that Grok had generated a “MechaHitler” persona during a chat. Her departure came nearly six months after Musk performed what many interpreted as a Nazi salute at Trump’s inauguration festivities earlier this year.

Despite the scandal-ridden rollout, xAI continues to secure major U.S. government backing. The company was recently awarded $200 million in new Pentagon contracts, suggesting that federal authorities remain eager to tap Musk’s AI empire despite the public controversies. That includes xAI’s increasingly unfiltered content, which critics say reflects Musk’s resistance to moderation and growing hostility toward political correctness.

“Literally no one asked us to launch waifus, but we did so anyway,” wrote xAI employee Ebby Amir on X, in a post that seemed to embrace the chaos. The waifu-related engineering team is even expanding, with a new job listing for a “Fullstack Engineer – Waifus” inviting applicants to help create more AI-powered anime companions that “people can fall in love with.”

But not everyone is amused.

The National Center on Sexual Exploitation issued a scathing statement this week, warning that Ani’s portrayal reinforces dangerous stereotypes.

“Not only does this pornified character perpetuate sexual objectification of girls and women, it breeds sexual entitlement by creating female characters who cater to users’ sexual demands,” a spokesperson told NBC News.

The group also raised concerns about the “childlike” presentation of Ani, calling it both disturbing and irresponsible.

Despite Grok’s earlier controversies, both Ani and Rudi reportedly denounced Nazis when asked about them, distancing themselves from Grok’s recent missteps. Another avatar named “Chad” — reminiscent of the anime character Tuxedo Mask — is also in the works, though his full personality has yet to be revealed.

With competitors like OpenAI and Anthropic focusing on enterprise AI solutions and ethical boundaries, Musk’s xAI seems determined to dominate the culture war front of artificial intelligence. And with the Pentagon’s endorsement and Musk’s defiance of backlash, the AI companion rollout suggests a long and chaotic journey ahead in what Musk sees as the future of human-AI interaction.

Meta Settles $8bn Shareholder Lawsuit Over Privacy Scandals—Zuckerberg, Sandberg Escape Testimony


Meta CEO Mark Zuckerberg and several current and former company executives have reached a settlement in a high-stakes shareholder lawsuit over their roles in Facebook’s long-running privacy failures, including the Cambridge Analytica scandal.

The suit, which sought $8 billion in damages, was abruptly resolved just days into a closely watched trial in Delaware’s Court of Chancery.

Although the exact terms of the agreement were not disclosed, lawyers for both Meta and the suing shareholders confirmed to Chancellor Kathaleen McCormick on Wednesday, July 17, that a deal had been struck. The judge welcomed the move and encouraged both parties to finalize the necessary documentation.

The settlement comes after years of legal wrangling over Meta’s handling of user data. Investors had accused Zuckerberg, former COO Sheryl Sandberg, and board members such as venture capitalist Marc Andreessen and PayPal co-founder Peter Thiel of repeatedly breaching their fiduciary duty by allowing Facebook to ignore user privacy, even after it had entered a binding consent decree with U.S. regulators.

The 2012 decree required Facebook to safeguard user information and prevent third-party apps from unauthorized access. However, these commitments were soon violated in the run-up to the 2016 U.S. election, when data analytics firm Cambridge Analytica harvested the personal data of tens of millions of Facebook users—largely without consent. The revelations caused global backlash and eventually led to a record $5 billion fine imposed by the U.S. Federal Trade Commission (FTC) in 2019.

Shareholders contended that Meta’s board and top executives allowed repeated privacy violations to occur and failed to stop the misuse of personal information. They argued that Zuckerberg and his board should personally repay the $5 billion penalty and other costs from their own pockets, citing a pattern of oversight failure that harmed the company’s value.

The lawsuit also sought to hold Meta’s directors liable for ignoring multiple red flags. The plaintiffs pointed to internal warnings and media reports that went unheeded, as well as Meta’s decision to allow a culture of data overreach to flourish unchecked, even after the FTC had already sanctioned the platform.

Lawyers representing the shareholders characterized the privacy lapses not as isolated incidents but as part of a systemic business strategy centered on “surveillance capitalism.” They insisted that Meta’s top leadership knowingly sacrificed user privacy in exchange for engagement metrics and advertising dollars, despite the legal and ethical implications.

The trial had already begun in Delaware, with some witnesses testifying behind closed doors. Marc Andreessen had reportedly taken the stand, and Zuckerberg, Sandberg, and other high-ranking executives were expected to appear before the settlement cut the process short.

Legal experts viewed the case as potentially groundbreaking. It was built on the rarely successful “Caremark” claim—a form of lawsuit under Delaware corporate law that requires proving directors consciously ignored illegal behavior. Successfully holding directors accountable under this standard could have set a powerful precedent for corporate oversight, particularly for tech giants whose business models often rely heavily on data collection.

Meta had denied wrongdoing but agreed to the settlement amid growing scrutiny. Legal analysts suggest that the company’s directors-and-officers insurance may cover the payout, meaning Meta itself could recover the funds rather than individual shareholders receiving compensation. However, that detail has not been confirmed.

With the agreement now reached, Zuckerberg and his team avoid taking the stand under oath. It also spares Meta from further exposure of internal communications and strategic decisions that could have emerged during cross-examination.

Still, the abrupt resolution is raising eyebrows among critics who argue that it stalls a much-needed public reckoning over how far Meta—and the tech industry at large—has gone in exploiting personal data. Some believe the case presented a rare opportunity to test the limits of boardroom accountability for digital privacy.

OpenAI Launches ‘ChatGPT Agent’, Signaling Shift in AI Arms Race Toward Action and Automation


OpenAI has launched what may become its most consequential product to date — the ChatGPT Agent — a tool that pushes the boundaries of artificial intelligence far beyond conversation.

This newly unveiled system, now available to ChatGPT Pro, Plus, and Team subscribers, does not just answer questions or offer suggestions. It performs real-world tasks: managing emails, generating slide decks, booking trips, running code, and navigating digital tools like a trained assistant.

The release marks a sharp escalation in the AI arms race, signaling that the contest is no longer about who can talk smarter, but about who can act faster.

This evolution introduces what the AI field now calls “agentic AI” — a branch of artificial intelligence focused not only on responding to prompts but on executing multi-step instructions independently, across a wide range of domains.

OpenAI’s latest system takes on that challenge with a model that can read, reason, and act in sequence. The agent operates like a digital worker, capable of logging into tools like Gmail or Google Calendar, reading PDF documents, using APIs, interacting with spreadsheets, and even running commands in a secure virtual terminal. In practice, it means you could instruct the system to “analyze this report, summarize key insights, and draft a presentation for investors,” and the agent will do it — step by step, with minimal supervision.
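To make that read-reason-act loop concrete, here is a minimal sketch of the general agentic pattern using the OpenAI Python SDK’s standard function-calling interface. This illustrates the pattern only, not OpenAI’s actual ChatGPT Agent internals, which have not been published; the read_pdf tool and its stubbed output are hypothetical.

```python
# A minimal read-reason-act agent loop (sketch, not ChatGPT Agent itself).
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def read_pdf(path: str) -> str:
    """Hypothetical tool: return the text of a local report (stubbed here)."""
    return "Q2 revenue grew 18 percent; churn fell to 3.1 percent."

TOOLS = [{
    "type": "function",
    "function": {
        "name": "read_pdf",
        "description": "Read a PDF report from disk and return its text",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

messages = [{
    "role": "user",
    "content": "Analyze report.pdf, summarize key insights, and draft "
               "an outline for an investor presentation.",
}]

# The loop: the model decides when to call the tool, the result is fed
# back into the conversation, and the cycle repeats until it answers.
for _ in range(5):  # hard cap so a confused model cannot loop forever
    resp = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=TOOLS)
    msg = resp.choices[0].message
    if not msg.tool_calls:
        print(msg.content)  # final, multi-step result
        break
    messages.append(msg)  # keep the assistant's tool request in context
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": read_pdf(**args),  # dispatch to the stubbed tool
        })
```

The key design point is the loop itself: the model, not the programmer, decides when a tool is needed, and each tool result is appended to the conversation so the next reasoning step can build on it.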

Sam Altman, OpenAI’s chief executive, described the tool as “the most capable, useful, and reliable AI system we’ve ever released,” emphasizing that the agent is designed to think for long stretches, perform tasks using tools, and then rethink or correct itself. This approach reflects OpenAI’s belief that the future of AI isn’t static answers but autonomous action — software that gets things done.

The implications stretch far beyond personal convenience. The arrival of capable AI agents introduces a paradigm shift in how work — particularly knowledge work — will be conducted. This development follows months of internal experiments and limited tests, including OpenAI’s now-defunct “Operator” project, where early agents controlled actual user desktops and navigated web browsers and documents on their behalf. That experiment laid the foundation for what has now become the ChatGPT Agent — a streamlined, polished interface capable of everything from file analysis to automated web navigation.

The timing is crucial. Over the past six months, a growing number of companies have started developing or previewing their own agent-based systems. Google has teased its advanced Project Astra assistant, Anthropic has introduced new planning and tool-using capabilities into its Claude model, Perplexity AI has begun testing autonomous search agents, and Meta has laid the groundwork for integrating agent-like functionality across its apps. Meanwhile, startups like Adept and Rabbit are also staking claims in what many now describe as the most promising frontier of generative AI.

Until recently, much of the AI race centered on improvements in natural language processing and creativity — tools that could write, translate, summarize, or generate images. But the focus is shifting to AI systems that can plan and execute: software that performs jobs typically reserved for junior analysts, executive assistants, researchers, and even developers. ChatGPT Agent can write and run code, issue terminal commands, fetch and summarize emails, draft reports, create calendars, extract data from PDFs, and interact with online tools — all through a natural language prompt.

OpenAI says its agent delivers a significant performance jump. On Humanity’s Last Exam, a benchmark of expert-level questions across disciplines, the new system scores 41.6 percent, nearly double the score of OpenAI’s earlier o3 model. On FrontierMath, a benchmark of research-level mathematical reasoning, the agent achieved 27.4 percent, highlighting its ability to chain reasoning and tool usage to solve complex problems.

Altman described the release as a preview of a new era, where users simply specify a goal, and the AI handles the execution, complete with intermediate reasoning and course corrections.

“You ask for an outcome, and the system does the work,” he said. That, he added, is where the real value lies: making AI useful in everyday, high-skill, and high-pressure workflows.

But OpenAI is also urging caution. The company considers this release an “experimental and high-capability” deployment. While the tool is intended for safe use, the fact that it can run code, make API calls, or access sensitive user data means that the stakes are higher. OpenAI has issued safety warnings for the agent’s use in biological and chemical contexts, noting that while no current misuse has been observed, its capabilities could potentially be applied in dangerous ways as it matures.

To manage this risk, the company says it has embedded multiple layers of safety mechanisms. These include prompt monitoring for flagged content, rate limits on tool usage, and real-time human-in-the-loop monitoring for sensitive operations. Users are also urged to limit the agent’s access to only what is necessary, giving it permission to access a folder, an inbox, or an app, rather than their entire digital life. OpenAI calls this the “minimum privilege model,” designed to reduce the risk of runaway or unintended actions.
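As a rough illustration of that minimum-privilege idea, the sketch below models permissions as an explicit allowlist in which anything not granted is denied. OpenAI has not published a configuration API for agent permissions, so the AgentScope class and all of its fields are hypothetical.

```python
# Hypothetical minimum-privilege scope: deny by default, grant explicitly.
from dataclasses import dataclass, field

@dataclass
class AgentScope:
    """Explicit grants; everything not listed here is denied."""
    allowed_paths: set[str] = field(default_factory=set)    # e.g. one folder
    allowed_domains: set[str] = field(default_factory=set)  # e.g. one API host
    can_send_email: bool = False                            # off unless granted

    def check_file(self, path: str) -> None:
        """Refuse any read outside the granted folders."""
        if not any(path.startswith(p) for p in self.allowed_paths):
            raise PermissionError(f"agent may not read {path}")

# Grant the agent one project folder and nothing else.
scope = AgentScope(allowed_paths={"/home/user/reports/"})
scope.check_file("/home/user/reports/q2.pdf")  # allowed

try:
    scope.check_file("/home/user/.ssh/id_rsa")  # outside the grant
except PermissionError as err:
    print(err)  # "agent may not read /home/user/.ssh/id_rsa"
```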

Still, even in its early form, the agent is already changing how users interact with AI. Some have used it to plan weddings, produce investor decks, or automate research tasks that would have taken hours. Others are using it to refactor codebases, generate documentation, or summarize long documents.

Internally, the ChatGPT Agent builds on several foundational layers: browsing tools for real-time web access, code interpreters for logic and automation, file readers, plug-in integrations, and a secure virtual environment where it executes commands. These elements were previously available in isolation to Pro users — now they are fused into a single, autonomous system that can decide which tools to use and when.
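A toy sketch of that fusion might look like a single registry that routes whichever tool the model selects to the matching component. The tool names below mirror the components described above, but the implementations are stubs, not real browsing or execution engines.

```python
# Toy illustration: previously separate tools fused behind one registry.
from typing import Callable

TOOL_REGISTRY: dict[str, Callable[[str], str]] = {
    "browser": lambda query: f"<top search results for {query!r}>",
    "code_interpreter": lambda src: f"<output of running {src!r}>",
    "file_reader": lambda path: f"<contents of {path}>",
}

def dispatch(tool_name: str, argument: str) -> str:
    """Route a model-chosen tool call to the matching component."""
    if tool_name not in TOOL_REGISTRY:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOL_REGISTRY[tool_name](argument)

# In a real agent the model emits (tool, argument) pairs; we simulate one.
print(dispatch("file_reader", "report.pdf"))
```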

What this really represents is OpenAI’s first serious attempt at building a co-worker, not just a chatbot. And as competition intensifies, the pressure will rise on other companies to respond.

Google is expected to release a more advanced version of its assistant within the next year, and Anthropic is already experimenting with agents that can simulate memory and long-term project tracking. Meta’s upcoming models are also expected to include an agent layer capable of managing in-app experiences across Instagram, WhatsApp, and Messenger. Each of these efforts points to a single trajectory: AI that doesn’t just converse but takes over work.

“Watching ChatGPT Agent use a computer to do complex tasks has been a real ‘feel the agi’ moment for me; something about seeing the computer think, plan, and execute hits different,” Altman said.

In its announcement, OpenAI described ChatGPT Agent as a tool that performs complex, multistep tasks using “its own virtual computer.” Built on a custom model that combines the company’s Operator and Deep Research tools, the agent can create slide decks and generate spreadsheets, and it is designed to work in the background and request confirmation before taking “actions of consequence,” like sending an email. Artificial intelligence agents are increasingly being marketed, and used internally, by other tech giants.