The Future of Cybersecurity: Microsoft Launches AI-Powered Security Copilot

Microsoft introduces Security Copilot, a tool for cybersecurity professionals that enhances the quality of detection, the speed of response, and the ability to strengthen security posture.


Microsoft Launches AI-Powered Security Copilot

Microsoft has introduced its latest AI-based product, Security Copilot, which combines the company’s security-specific model with OpenAI’s GPT-4 generative AI. The new platform is aimed at allowing cybersecurity professionals to work at the speed and scale of AI, given the global shortage of skilled security professionals in the industry. Security Copilot runs on Azure’s hyperscale infrastructure and can catch what other approaches might miss, providing gains in the quality of detection, the speed of response, and the ability to strengthen security posture. The system also includes a learning component that can create and tune new skills, and it can integrate with Microsoft’s end-to-end security products as well as third-party products.


World’s First AI-Generated Satirical News Website Launches, Outperforms Humans in Humor and Intelligence

Hey folks, AI can now make you laugh! Meet The FittAIst, the world’s first AI-generated satirical news website that uses advanced GPT technology to create funny headlines and articles. Their bold and thought-provoking headlines include “Parents Should Be Replaced by AI Before They Ruin Another Christmas” and “AI Releases List of Things Humans Should Have Already Figured Out for Themselves.” The FittAIst challenges the idea of human superiority and offers a fresh perspective, untainted by human biases. Plus, they have a “Human-Free Pledge” to ensure that no humans are involved in their content creation. So sit back, relax, and enjoy the unmatched wit of AI-generated satire.


Got It AI’s ELMAR Challenges GPT-4 and LLaMa, Scores Well on Hallucination Benchmarks

Got It AI has launched ELMAR (Enterprise Language Model Architecture), a large language model (LLM) for chatbot Q&A applications that can be integrated with any knowledge base. It is significantly smaller than OpenAI’s GPT-3 and can run on-premises, making it more cost-effective. Because it does not depend on Facebook Research’s LLaMA or Stanford’s Alpaca, it is commercially viable. Got It AI claims that ELMAR benefits businesses by running on less expensive hardware and allowing the model to be fine-tuned on target data, without requiring costly API-based models. In a study measuring hallucination rates, Got It AI compared ELMAR to other LLMs, including ChatGPT, GPT-3, GPT-4, GPT-J/Dolly, LLaMA, and Alpaca. ELMAR was on par with the others, responding accurately and correctly identifying incomplete and incorrect responses.


Midjourney Ends Free Trials of its AI Image Generator Due to ‘Extraordinary’ Abuse

Midjourney has ended free use of its AI image generator after users created high-profile deepfakes, including one of Donald Trump being arrested and another of Pope Francis wearing a trendy coat. The company’s new safeguards were not enough to prevent misuse, so users must now pay at least $10 per month to use the technology. Midjourney and other AI generators have had trouble establishing content policies, amid concerns that bad actors may use the tools to spread misinformation. Some developers enforce strict rules, while others have looser guidelines. There are also concerns that AI-generated pictures may be derived from existing copyrighted images.


Google Shuffles Assistant Leadership to Focus on Bard AI

Google is reshuffling its Assistant leadership team, with the Vice President of Google Assistant’s business unit announcing in a leaked memo that the company is putting more focus on AI. As part of the reshuffle, the Assistant’s longtime Vice President of Engineering is leaving the company, and his position will be taken over by Google’s Vice President of Engineering, who previously worked on Google Pay. Additionally, the current Assistant Engineering Vice President will now lead engineering for Bard, Google’s AI chatbot. The move suggests that Google may eventually integrate Bard into Google Assistant, potentially making it more personalized and natural. However, concerns remain over the accuracy of AI-generated text and the potential for misinterpreting commands.


Super Hi-Fi & ElevenLabs Launch AI Radio

Super Hi-Fi, a global leader in AI-powered radio experiences, has partnered with ElevenLabs, the world’s leading text-to-audio AI software, to create a fully customized and personalized radio experience. The partnership has resulted in “AI Radio”, a live radio station streaming 24/7, created and managed exclusively by AI using Super Hi-Fi’s MagicStitch™, ElevenLabs’ Prime Voice AI, and ChatGPT. With the integration of ElevenLabs’ voice creation tools into Super Hi-Fi’s production platform, broadcasters and music services can offer their audiences a completely AI-driven listening experience. The result is a digital, interactive, and personalized listening experience that offers all the benefits of professionally produced radio.


Ripple Co-Founder Joins Elon Musk in His Stance Against Further AI Development

Ripple co-founder Chris Larsen, Elon Musk, and Steve Wozniak are among 1,100 people who have signed an open letter urging AI research companies to pause the development of advanced artificial intelligence for half a year. The letter requests that no AI system more powerful than the current language model GPT-4 be trained within the next six months. Musk has recently made several public statements against the further development of AI, and he is allegedly seeking to create a company to rival OpenAI and ChatGPT. The signatories believe that advanced AI may eliminate around 300 million jobs in the future, according to a report by the Financial Times.


UK Rules Out New AI Regulator

The UK government has decided against creating a new AI regulator and will instead issue guidelines on the responsible use of AI for existing regulators. Critics have expressed concerns that AI could pose risks to privacy, human rights, or safety, and can display biases against certain groups of people. While the government wants existing regulators to establish their own approach to AI governance, experts have warned that this could leave gaps in regulation. The guidelines set out five principles for regulators: accountability and governance; contestability and redress; fairness; safety, security and robustness; and transparency and “explainability”.


AI Prompt Engineering: How Talking to ChatGPT Became the Hottest Tech Job With a Six Figure Salary

The rise of AI-powered chatbots has created a new job in the tech industry: AI prompt engineering. The work involves communicating effectively with AI models to shape how they respond and to keep them within specific guidelines. It doesn’t require a technical background; instead, it calls for strong language and grammar skills, data analysis, and critical thinking. Many companies are currently looking to hire prompt engineers, with salaries ranging from €160,000 to €308,000. While prompt engineering is gaining popularity, there is no specific formula for it, making it more of an art form than anything else. With demand for prompt engineers increasing, this is a role that many people may have to upskill into.


Multiple Red Flags Are Not Yet Slowing The Generative AI Train

Generative AI models like ChatGPT and DALL-E 2, developed by OpenAI, have become increasingly prevalent, with over 100 million users experiencing their capabilities. While some experts believe that AI could boost productivity and global GDP, others have warned of the dangers of uncontrolled machines flooding the internet with falsehoods and threatening civilization. These concerns have led to calls for a six-month moratorium on the development of leading-edge models until better governance regimes can be put in place. Some experts are also concerned about the shrinking ethics teams at some big tech companies. To avoid public backlash, AI companies need to prove that their models align with humanity’s best interests, and independent institutions should audit their algorithms and restrict their use.


The Problem with Artificial Intelligence? It’s Neither Artificial nor Intelligent

Tech enthusiasts, investors, and Silicon Valley companies have been using the term “artificial intelligence” for decades. In reality, however, what we call AI is neither artificial nor intelligent. The term is a misnomer, coined for its allure to science-fiction enthusiasts and investors. The systems we call AI today draw their strength from the work of real humans, including artists, musicians, programmers, and writers whose professional output is now appropriated in the name of saving civilization. Early AI systems were heavily rule- and program-driven, but those of today, including everyone’s favorite, ChatGPT, are a form of “non-artificial intelligence.” The Cold War imperatives that funded much of the early work in AI left a heavy imprint on how we understand it. The kind of intelligence that would come in handy in a battle, such as pattern-matching, is the type that modern AI possesses. However, many critics have pointed out that intelligence is not just pattern-matching; equally important is the ability to draw generalizations. Machines cannot have a sense (rather than mere knowledge) of the past, present, and future; of history, injury, or nostalgia. Without that, there is no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in singular formal logic. If we want such creativity to persist, we should also be funding the production of art, fiction, and history—not just data centers and machine learning.


The FTC Should Investigate OpenAI and Block GPT over ‘Deceptive’ Behavior, AI Policy Group Claims

The Center for AI and Digital Policy (CAIDP) has filed a complaint with the Federal Trade Commission (FTC), calling for an investigation into OpenAI and its AI product, GPT, due to algorithmic bias, privacy concerns, and inaccuracies that may violate consumer protection law. The complaint asks the FTC to prohibit OpenAI from releasing future versions of GPT and establish new regulations for the AI sector. It also alleges that OpenAI has violated the FTC’s AI guidelines by transferring responsibility for risks onto its clients who use the technology. The complaint is an early test of the US government’s appetite for directly regulating AI.


Publishers Worry AI Chatbots Will Cut Readership

AI-powered chatbots that offer fuller answers to queries could lead to fewer visitors to publishers’ sites, reducing both their traffic and revenue. Tech firms such as Google and Microsoft are developing the search tools, which provide information in paragraph form rather than as links. Many media companies, including Condé Nast and Vice, fear they may lose business as a result. Task forces have been established to evaluate responses to the threat, and industry conferences have highlighted the issue. The News Media Alliance, which represents 2,000 media outlets including The New York Times, is also developing principles to protect publishers.