OpenAI’s ChatGPT app can now search the web — but only via Bing
OpenAI has rolled out a new feature on its ChatGPT Plus app that allows the AI-powered chatbot to use Bing to search for answers to queries. Available on both iOS and Android, the feature, named “Browsing,” is especially handy for questions about current events that fall outside ChatGPT’s training data. With Browsing disabled, the bot’s knowledge ends in 2021.
Earlier this year, Microsoft – which recently pumped a ton of money into OpenAI – revealed that Bing would be the go-to search engine for ChatGPT. The feature is currently in a trial stage for Plus users on the ChatGPT web app, where it can scour the web for answers on the latest topics and events. The free version of the app, however, only provides information up to 2021.
In a related change, users can now tap on a search result and jump directly to that point in the conversation. Both this update and the Browsing feature are rolling out this week, says OpenAI.
Amazon launches AWS AppFabric to help customers connect their SaaS apps
Amazon has launched AWS AppFabric, a no-code service designed to connect software-as-a-service (SaaS) apps in businesses. The service, accessible through the AWS console, allows users to integrate third-party apps and obtain an overview of app usage and performance.
AppFabric normalizes and aggregates data using the Open Cybersecurity Schema Framework (OCSF) so it can be analyzed, for example to identify security threats. It also lets administrators set common policies across apps and can alert teams to unusual activity.
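AWS hasn’t published AppFabric’s internals beyond its use of OCSF, but the core normalization idea is simple: map each app’s differently shaped audit events onto one common schema so they can be analyzed together. A minimal sketch in Python – the app names, field names, and event shapes here are invented for illustration and are not AppFabric’s or OCSF’s actual schema:

```python
from datetime import datetime, timezone

# Hypothetical raw audit events from two SaaS apps, each with its own field names.
slack_event = {"user": "alice", "action": "file_download", "ts": 1687600000}
zoom_event = {"actor_email": "bob@example.com", "event_type": "login_failed",
              "time": "2023-06-24T10:00:00+00:00"}

def normalize_slack(e):
    """Map a Slack-style event onto a common, OCSF-inspired shape."""
    return {
        "actor": e["user"],
        "activity": e["action"],
        "time": datetime.fromtimestamp(e["ts"], tz=timezone.utc).isoformat(),
        "source": "slack",
    }

def normalize_zoom(e):
    """Map a Zoom-style event onto the same common shape."""
    return {
        "actor": e["actor_email"],
        "activity": e["event_type"],
        "time": e["time"],
        "source": "zoom",
    }

# Once every event shares one schema, a single rule works across all apps.
SUSPICIOUS = {"login_failed", "file_download"}

events = [normalize_slack(slack_event), normalize_zoom(zoom_event)]
alerts = [e for e in events if e["activity"] in SUSPICIOUS]
for a in alerts:
    print(f"{a['source']}: {a['actor']} -> {a['activity']} at {a['time']}")
```

The payoff is the last few lines: security rules are written once against the normalized schema instead of once per app.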
This new service addresses the growing burden of managing multiple SaaS apps in organizations. A recent survey found that more than 30% of businesses experienced duplicated work because of juggling several SaaS applications. Handling app integration in-house can be costly and time-consuming, making services like AppFabric a convenient alternative.
AppFabric stands out from other integration-platform-as-a-service (iPaaS) offerings thanks to AWS’s reputation as a trusted entity. It currently supports 17 apps, with more to come, including an AI assistant built on Amazon’s Bedrock generative AI platform. The assistant will be able to generate insights and create content using context drawn from various SaaS apps.
Databricks continues generative AI push, launches LakehouseIQ, Lakehouse AI tools
Databricks is stepping up its use of generative AI, revealing LakehouseIQ at its yearly conference. LakehouseIQ is an AI tool designed to simplify access to data insights for everyone. It’s part of a series of innovations Databricks is launching aimed at helping customers build and manage machine learning models on their data platform, known as a lakehouse.
LakehouseIQ operates as an AI “knowledge engine” that lets anyone in a company search and analyze corporate data by asking questions in plain English. It doesn’t require any programming skills. Plus, it’s closely integrated with Databricks’ Unity Catalog, a product focused on unified search and governance, so it upholds internal security and rules.
Databricks is also introducing new features to its Lakehouse AI, including improved AI responses, a variety of open-source models available in the marketplace, and better visibility into data pipelines driving AI efforts.
In addition, Databricks launched Delta Lake 3.0, a new version of its open-source data storage format with improved performance and cross-format compatibility, at the conference. These efforts reflect Databricks’ mission to make data and AI more accessible and secure.
Nvidia Drops on Report US Plans More AI Chip Curbs for China
U.S. chip stocks took a hit yesterday on news of possible new restrictions on exporting artificial intelligence (AI) chips to China. Shares of companies like Advanced Micro Devices, Nvidia, and Intel, which count on China for about 20% of their revenue, fell between 0.2% and 1.8%. The overall Philadelphia chip index dropped 0.9%.
Last year, the U.S. told Nvidia to stop shipping its best AI chips to China in order to slow the country’s tech advancement. Nvidia responded by releasing a new chip, the A800, designed to comply with the export rules for China. But the Commerce Department is considering new restrictions that would require a special license to sell even the A800.
Investors are closely watching to see how restrictive the new rules will be for chipmakers. Because tech companies make up such a large part of Wall Street, any shift in confidence can have a big effect on the market.
CalypsoAI raises $23M to add guardrails to generative AI models
AI startup CalypsoAI has just raked in $23 million in its latest funding round, raising the company’s total investment to $38.2 million. It’s easy to see why investors are excited – businesses are on the hunt for AI tech, but are also cautious about its risks. Turns out, CalypsoAI has a fix for that.
The company has designed a platform that tests and monitors AI applications, ensuring they’re safe and reliable before being deployed. You could say they’re putting up “guardrails” around generative AI models – the kind businesses are starting to rely on more and more. The platform keeps a close eye on AI models like ChatGPT, surfacing useful stats, preventing sensitive company information from being exposed, and blocking attacks from malicious AI tools.
NVIDIA H100 GPUs Set Standard for Generative AI in Debut MLPerf Benchmark
NVIDIA’s H100 Tensor Core GPUs have taken the top spot in AI performance according to the latest MLPerf training benchmarks. These GPUs shine in powering generative AI and large language models (LLMs), setting records across all eight tests.
Using a cluster of 3,584 H100 GPUs, a tech startup named Inflection AI and cloud service provider CoreWeave completed a large-scale GPT-3 training benchmark in under 11 minutes. Inflection AI uses the performance of the H100 GPUs to fuel Pi, their first personal AI.
The H100 GPUs stood out in every area tested, including large language models, recommenders, computer vision, medical imaging, and speech recognition. They were the only accelerators to run all eight tests, showcasing the versatility of the NVIDIA AI platform.
Vertical AI: The next logical iteration of vertical SaaS
Vertical SaaS (Software as a Service), cloud-based software designed for specific industries, has been gaining popularity as customers demand solutions tailored to their business problems. Big tech firms such as Amazon’s AWS, Microsoft’s Azure, and Google Cloud Platform have adopted a vertical strategy, focusing on providing specific solutions for each industry.
AI can be thought of in three layers: foundational models, AI infrastructure, and AI applications. Foundational models are the fundamental building blocks of the AI stack, led by companies like Anthropic, Cohere, and OpenAI. AI infrastructure comprises the tools and resources used to manage AI, including data enhancement, fine-tuning, databases, and model-training tools; businesses like Pinecone, Weaviate, and Scale are making strides in this area.
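As a rough illustration of what infrastructure providers like Pinecone and Weaviate offer at scale, a vector database boils down to nearest-neighbor search over embeddings. A toy in-memory sketch – the document names and vectors below are made up, and real systems use approximate indexes over millions of high-dimensional vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical precomputed embeddings for three documents.
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

def search(query_vec, k=1):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(search([0.85, 0.15, 0.05]))  # nearest document to this query vector
```

An AI application would embed a user’s question with a model, then call something like `search` to pull relevant context; the infrastructure layer exists so teams don’t rebuild this plumbing themselves.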
The last layer, AI applications, is being enabled by the advancements in foundational models and infrastructure. These applications have the potential to be used across industries and tasks, providing a wide array of solutions. The concept of vertical AI essentially takes these applications and tailors them to solve specific industry problems, mirroring the trends we’ve seen in vertical SaaS.
Gather AI buys drone inventory competitor, Ware
AI company Gather AI has purchased its drone-inventory competitor, Ware, in a move to scale up and serve their growing market more effectively. The acquisition comes after a series of quiet months at Ware, including a last-minute CEO change in February. The merged company will draw on both Gather AI’s more mature technology and Ware’s go-to-market approach.
The use of drones to track warehouse inventory is still a relatively fresh idea, despite notable partnerships such as IKEA’s collaboration with Verity. The union of Gather AI and Ware aims to accelerate the technology’s adoption and serve a combined 25 customers, a total the companies say was split evenly between them before the merger.
Neither company builds its own drone hardware; Ware relies on Skydio and Gather AI on DJI. The merged company, which retains the name Gather AI, says it is focused on being “hardware agnostic.” It aims to scale like a software company, believing investment in hardware development is unnecessary for this niche.
BlackRock Joins AI Mania, Calling It a Potential ‘Mega Force’
BlackRock, the world’s biggest asset manager, is doubling down on artificial intelligence (AI), calling it a potential ‘mega force’. They see it as a major opportunity for increased productivity, and have identified semiconductor manufacturers and companies with lots of data or high potential for automation as the likely winners in this new AI-driven world.
Even as BlackRock prepares for a minor recession and short-term decline in developed-nation equities, they’re selectively buying into tech. Their eye is on the long game, believing that long-term investors can ride out short-term hiccups. Despite some bumps in the road earlier this year, better-than-expected economic data and corporate earnings, along with the rise of AI, have breathed new life into stocks.
BlackRock sees opportunities in the rise of AI, private direct lending, and the move towards a greener economy. They believe AI’s current rally, though reminiscent of the 2000 dot-com bubble, is more grounded because it’s supported by actual demand.
Google is having productive talks with the EU on A.I. regulation, cloud boss says
Google Cloud’s CEO, Thomas Kurian, revealed that Google is engaging in fruitful early-stage talks with the European Union (EU) about the latest artificial intelligence (AI) regulations. Their goal is to construct AI tools that are safe, responsible, and compliant with these regulations. One of the EU’s concerns is distinguishing between human-generated and AI-produced content. Google is addressing this through the development of technologies like an AI watermarking system.
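Google hasn’t described its watermarking system publicly, but one published approach from the research literature biases generation toward a context-dependent “green list” of tokens, which a detector can later check statistically. A minimal sketch of the detection side – the toy vocabulary and function names are illustrative, not Google’s actual method:

```python
import hashlib

# Toy vocabulary; a real model's vocabulary has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran", "fast", "a", "and"]

def green_list(prev_token):
    """Deterministically mark half the vocabulary 'green' given the previous token.

    A watermarking generator would softly prefer green tokens at each step;
    the same function lets a detector recompute the lists after the fact.
    """
    h = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return {w for i, w in enumerate(VOCAB) if (h + i) % 2 == 0}

def green_fraction(tokens):
    """Fraction of tokens falling in the green list of their predecessor.

    Unwatermarked text hovers near 0.5; watermarked text scores well above it.
    """
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev))
    return hits / max(len(pairs), 1)
```

A detector flags text whose green fraction is statistically above the roughly 0.5 expected by chance, without needing access to the model that generated it.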
Google is committed to cooperating with EU regulators to make sure it understands their concerns, while also providing tools to recognize AI-generated content. Kurian emphasized that Google is not against regulation and is working with governments in the EU, the UK, and other countries to see these technologies adopted responsibly.
ChatGPT Maker OpenAI Faces Class Action Over How It Used People’s Data
OpenAI, the company behind the popular chatbot ChatGPT, is staring down the barrel of a class-action lawsuit spearheaded by a California law firm. The suit claims OpenAI trampled on the copyrights and privacy of multitudes by using their data, scraped from various corners of the internet, to train its tech.
The legal battle, which seeks to represent “real people” whose information was co-opted for this technology, touches on an unresolved issue with these AI tools: Is it legal to use publicly available internet data for potentially profit-making tools? Some developers claim this falls under “fair use” in copyright law, which allows exceptions for transformative changes to the material.
The suit adds to the growing list of legal troubles faced by AI tech firms. OpenAI and others, including Google, Facebook, and Microsoft, are under scrutiny for their use of massive amounts of data scraped from the open web to train AI models.
AI-generated tweets might be more convincing than real people
In a new study, people found tweets written by the AI language model GPT-3 more believable than those penned by humans. A team led by researcher Giovanni Spitale collected tweets on 11 science topics and had GPT-3 write new tweets containing either accurate or inaccurate information. The researchers gathered responses from 697 online participants, mostly English speakers from the UK, Australia, Canada, the US, and Ireland.
The AI-written content seemed no different from human-created tweets, the study concluded. The study has its limitations, though, including that participants judged tweets without context such as the author’s Twitter profile, which might have helped them identify a bot.
The study also found that people were better than GPT-3 at judging accuracy in some cases. Improving the training data for these models could make it harder for bad actors to use them to spread lies. The study’s best strategy for countering misinformation? Encouraging critical-thinking skills so people can better tell fact from fiction. People skilled at fact-checking could also team up with AI models to improve legitimate public-information campaigns.
Meet The Humans Trying To Keep Us Safe From AI
Artificial intelligence (AI) is impacting our lives at a rapid pace and shaping the future in unprecedented ways. But it’s not just machines steering this tech revolution – it’s people: an increasingly diverse, ethics-focused group of researchers, entrepreneurs, activists, and even artists who are adding depth and dimension to the AI narrative.
Rumman Chowdhury, after her stint with Twitter, co-founded Humane Intelligence to crowdsource ways of unveiling weaknesses in AI systems. Sarah Bird at Microsoft is on a mission to prevent AI from producing biased and harmful outputs. Yejin Choi, a professor at the University of Washington, is designing an open source model with moral discernment.
Margaret Mitchell, dismissed from Google’s Ethical AI team, is now an ethics chief at Hugging Face, where she works to avert unanticipated problems from AI while keeping a human-centric perspective. Inioluwa Deborah Raji, once part of a project that uncovered bias in facial recognition tech, is now with the Mozilla Foundation, developing open-source tools to scrutinize AI systems for flaws.
Daniela Amodei, formerly of OpenAI, co-founded Anthropic, a company focused on the ethical development of AI. Lastly, Lila Ibrahim, COO at Google DeepMind, is making it her mission to ensure AI’s growth positively impacts society. Together, these individuals and their peers are working to ensure AI serves as a beneficial tool rather than a dystopian agent.