Laravel News Links
The first GPT-4-class AI model anyone can download has arrived: Llama 405B
https://cdn.arstechnica.net/wp-content/uploads/2024/07/lama405b_hero_3-800x450.jpg

In the AI world, there’s a buzz in the air about a new AI language model released Tuesday by Meta: Llama 3.1 405B. The reason? It’s potentially the first time anyone can download a GPT-4-class large language model (LLM) for free and run it on their own hardware. You’ll still need some beefy hardware: Meta says it can run on a "single server node," which isn’t desktop PC-grade equipment. But it’s a provocative shot across the bow of "closed" AI model vendors such as OpenAI and Anthropic.
"Llama 3.1 405B is the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation," says Meta. Company CEO Mark Zuckerberg calls 405B "the first frontier-level open source AI model."
In the AI industry, “frontier model” is a term for an AI system designed to push the boundaries of current capabilities. In this case, Meta is positioning 405B among the likes of the industry’s top AI models, such as OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, and Google’s Gemini 1.5 Pro.
A chart published by Meta suggests that 405B gets very close to matching the performance of GPT-4 Turbo, GPT-4o, and Claude 3.5 Sonnet in benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).
But as we’ve noted many times since March, these benchmarks aren’t necessarily scientifically sound, nor do they reliably reflect the subjective experience of interacting with AI language models. In fact, this traditional slate of AI benchmarks is so generally useless to laypeople that even Meta’s PR department now just posts a few chart images without trying to explain them in any detail.

A Meta-provided chart that shows Llama 3.1 405B benchmark results versus other major AI models.
We’ve instead found that measuring the subjective experience of using a conversational AI model (through what might be called "vibemarking") on A/B leaderboards like Chatbot Arena is a better way to judge new LLMs. In the absence of Chatbot Arena data, Meta has provided the results of its own human evaluations of 405B’s outputs that seem to show Meta’s new model holding its own against GPT-4 Turbo and Claude 3.5 Sonnet.

A Meta-provided chart that shows how humans rated Llama 3.1 405B’s outputs compared to GPT-4 Turbo, GPT-4o, and Claude 3.5 Sonnet in its own studies.
Whatever the benchmarks, early word on the street (after the model leaked on 4chan yesterday) seems to match the claim that 405B is roughly equivalent to GPT-4. It took a lot of expensive computer training time to get there—and money, of which the social media giant has plenty to burn. Meta trained the 405B model on over 15 trillion tokens of training data scraped from the web (then parsed, filtered, and annotated by Llama 2), using more than 16,000 H100 GPUs.
So what’s with the 405B name? In this case, "405B" means 405 billion parameters, and parameters are numerical values that store trained information in a neural network. More parameters translate to a larger neural network powering the AI model, which generally (but not always) means more capability, such as better ability to make contextual connections between concepts. But larger-parameter models have a tradeoff in needing more computing power (AKA "compute") to run.
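For a rough sense of why parameter count matters, here is a back-of-envelope sketch (an illustration only; the bytes-per-parameter figures assume common numeric precisions and ignore activations, KV cache, and framework overhead) of how much memory simply storing 405 billion weights requires:

# Back-of-envelope memory estimate for storing the weights alone
# (ignores activations, KV cache, and framework overhead).
PARAMS = 405e9  # 405 billion parameters

BYTES_PER_PARAM = {
    "fp16/bf16": 2,   # 16-bit floating point
    "int8": 1,        # 8-bit quantization
    "int4": 0.5,      # 4-bit quantization
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gigabytes = PARAMS * nbytes / 1e9
    print(f"{precision}: ~{gigabytes:,.0f} GB of weights")

At 16-bit precision that works out to roughly 810 GB of weights before any runtime overhead, which is why Meta talks about a “single server node” rather than a desktop PC.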
We’ve been expecting the release of a 400+ billion parameter model of the Llama 3 family since Meta gave word that it was training one in April, and today’s announcement isn’t just about the biggest member of the Llama 3 family: There’s an entire new iteration of improved Llama models with the designation "Llama 3.1." That includes upgraded versions of its smaller 8B and 70B models, which now feature multilingual support and an extended context length of 128,000 tokens (the "context length" is roughly the working memory capacity of the model, and "tokens" are chunks of data used by LLMs to process information).
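To make “tokens” concrete, the minimal sketch below counts how many tokens a sentence occupies, the same unit in which the 128,000-token context window is measured. It assumes the transformers library is installed and that you have access to one of the gated meta-llama repositories on Hugging Face; the repository id shown is an assumption and may differ.

# Count tokens the way the model sees them (sketch; requires `pip install transformers`
# and access to a gated meta-llama repository on Hugging Face).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

text = "Llama 3.1 405B is the first openly available frontier-class model."
token_ids = tokenizer.encode(text)

print(len(token_ids), "tokens")                    # how many chunks the model processes
print(tokenizer.convert_ids_to_tokens(token_ids))  # the chunks themselves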
Meta says that 405B is useful for long-form text summarization, multilingual conversational agents, and coding assistance, as well as for creating synthetic data used to train future AI language models. Notably, that last use case (allowing developers to use outputs from Llama models to improve other AI models) is now officially supported by Meta’s Llama 3.1 license for the first time.
Abusing the term “open source”
Llama 3.1 405B is an open-weights model, which means anyone can download the trained neural network files and run them or fine-tune them. That directly challenges a business model where companies like OpenAI keep the weights to themselves and instead monetize the model through subscription wrappers like ChatGPT or charge for access by the token through an API.
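“Download and run” is meant literally. As a minimal sketch of what local inference looks like with the smaller 8B sibling (assumptions: a capable GPU, the transformers and accelerate libraries, and access to the gated Hugging Face repository whose id may differ; this is not Meta’s official serving stack):

# Minimal local inference sketch for Llama 3.1 8B Instruct (illustrative only;
# assumes a capable GPU plus `transformers` and `accelerate`, and gated-repo access).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed repository id
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread the weights across available devices
)

print(generator("Explain why open-weights models matter.", max_new_tokens=200)[0]["generated_text"])

Because the weights are local files, fine-tuning follows the same pattern: they can be further trained with standard tooling rather than only queried behind an API.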
Fighting the “closed” AI model approach is a big deal to Mark Zuckerberg, who simultaneously released a 2,300-word manifesto today on why the company believes in open releases of AI models, titled “Open Source AI Is the Path Forward.” More on that terminology in a minute. But briefly, he writes about the need for customizable AI models that offer user control and encourage better data security, higher cost-efficiency, and better future-proofing, as opposed to vendor-locked solutions.
All that sounds reasonable, but undermining your competitors using a model subsidized by a social media war chest is also an efficient way to play spoiler in a market where you might not always win with the most cutting-edge tech. That benefits Meta, Zuckerberg says, because he doesn’t want to get locked into a system where companies like his have to pay a toll to access AI capabilities, drawing comparisons to "taxes" Apple levies on developers through its App Store.

A screenshot of Mark Zuckerberg’s essay, “Open Source AI Is the Path Forward,” published on July 23, 2024.
So about that "open source" term. As we first wrote in an update to our Llama 2 launch article a year ago, "open source" has a very particular meaning that has traditionally been defined by the Open Source Initiative. The AI industry has not yet settled on terminology for AI model releases that ship either code or weights with restrictions (such as Llama 3.1) or that ship without providing training data. We’ve been calling these releases "open weights" instead.
Unfortunately for terminology sticklers, Zuckerberg has now baked the erroneous “open source” label into the title of his aforementioned, potentially historic essay on open AI releases, so fighting for the correct term in AI may be a losing battle. Still, his usage annoys people like independent AI researcher Simon Willison, who otherwise likes Zuckerberg’s essay.
"I see Zuck’s prominent misuse of ‘open source’ as a small-scale act of cultural vandalism," Willison told Ars Technica. "Open source should have an agreed meaning. Abusing the term weakens that meaning which makes the term less generally useful, because if someone says ‘it’s open source,’ that no longer tells me anything useful. I have to then dig in and figure out what they’re actually talking about."
The Llama 3.1 models are available for download through Meta’s own website and on Hugging Face. Both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can, legally speaking, pull the rug out from under your use of Llama 3.1 or its outputs at any time.
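Accepting the license happens on the model’s Hugging Face page; after that, downloads are authenticated with an account token. Here is a minimal sketch of pulling the weights programmatically (the repository id and token are placeholders; the 405B files run to hundreds of gigabytes, so many people will start with the 8B variant):

# Authenticated download of gated Llama 3.1 weights (sketch; requires
# `pip install huggingface_hub` and prior license acceptance on the model page).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed repository id
    token="hf_...",  # your Hugging Face access token
)
print("Weights downloaded to:", local_dir)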
Ars Technica – All content
Witness the rise of the Bene Gesserit in new Dune: Prophecy teaser
https://cdn.arstechnica.net/wp-content/uploads/2024/07/duneTOP-760x380.jpg
The HBO Original Series Dune: Prophecy will premiere this November.
Fans of director Denis Villeneuve’s epic two-part film adaptation of Frank Herbert’s Dune have no doubt been curious about the upcoming HBO Max series, Dune: Prophecy. It’s a prequel series inspired by the novel Sisterhood of Dune, written by Brian Herbert and Kevin J. Anderson, exploring the origins of the Bene Gesserit. The studio just dropped a tantalizing teaser rife with political intrigue, ominous warnings, and a bit of hand-to-hand combat.
The series was first announced in 2019, with Villeneuve serving as an executive producer and Alison Schapker (Alias, Fringe, Altered Carbon) serving as showrunner. The first season will consist of six episodes, and it’s unclear how closely the series will adhere to the source material. Per the official premise:
Set 10,000 years before the ascension of Paul Atreides, Dune: Prophecy follows two Harkonnen sisters as they combat forces that threaten the future of humankind, and establish the fabled sect that will become known as the Bene Gesserit.
Emily Watson co-stars as Valya Harkonnen, leader of the Sisterhood, with Olivia Williams playing her sister, Tula Harkonnen. Mark Strong plays Emperor Javicco Corrino, described as "a man from a great line of war-time Emperors, who is called upon to govern the Imperium and manage a fragile peace," while Jodhi May plays Empress Natalya and Sarah-Sofie Boussnina plays Princess Ynez.
The cast also includes Shalom Brune-Franklin as Mikaela, a Fremen woman who serves the royal family; Travis Fimmel as Desmond Hart, described as "a charismatic soldier with an enigmatic past"; Chris Mason as swordsman Keiran Atreides; Josh Heuston as Constantine Corrino, the illegitimate son of Javicco; Edward Davis as rising politician Harrow Harkonnen; Tabu as Sister Francesca, the Emperor’s former lover; Jihae as Reverend Mother Kasha, the Emperor’s Truthsayer; Faoileann Cunningham as Sister Jen, Chloe Lea as Lila, Jade Anouka as Sister Theodosia, and Aoife Hinds as Sister Emeline, all acolytes at the Sisterhood School.
Power = control
A short teaser was shown in May during the Warner Bros. Discovery Upfront presentation in New York City. It was heavy on the exposition, with a voiceover describing the founding of a sisterhood assigned to the Great Houses "to help them sift truth from lies." The result was a "network of influence throughout the Imperium… but power comes with a price." They want to place a Sister on the throne and arrange a marriage to make it possible. Not all the Sisters were on board with the plan, however, with one warning that the Sisterhood was playing god "and we will be judged for it."
This latest teaser opens with an admonition to acolytes of the Sisterhood: "You wish to serve the Great Houses and shape the flow of power; you must first exert power over yourself." The emperor seems to be easily wooed by the "sorceresses," much to his empress’s chagrin, but the more influence the Sisterhood wields, the more enemies it gains. Desmond Hart also has his suspicions about the Sisterhood, probably with good reason. "Our hands are poised on the levers of power but yet our grasp on it is still fragile," Valya tells her sister Tula, assuring her that "I am trying to protect the Imperium"—and "sacrifices must be made."
Dune: Prophecy premieres this November on Max.
Listing image by HBO Max
Ars Technica – All content
Ultimate Guide to Improving MySQL Query Performance
https://www.percona.com/blog/wp-content/uploads/2024/06/Improving-MySQL-Query-Performance-200x112.jpg
MySQL is certainly a powerful open source database management system, but even the most robust engine struggles when queries take an eternity to execute. For DBAs and developers, improving MySQL query performance is an ongoing goal. Efficient query performance is crucial for ensuring the smooth operation and optimal user experience of applications powered by MySQL […]
Percona Database Performance Blog
4th of July Car Launch Highlights
https://theawesomer.com/photos/2024/07/glacier_view_car_launch_2024_t.jpg
Since Alaska gets so little darkness on July 4th, one town has come up with an alternative to fireworks. For over 20 years, Glacier View, Alaska, has hosted an event where junk cars are tossed off a 300-foot cliff while onlookers enjoy the chaos and destruction. 1320video attended this year’s festivities to give us an insider’s look at the event, including POV and aerial footage.
The Awesomer
Dropping 1000 Basketballs from an Airplane
https://theawesomer.com/photos/2024/07/dropping_basketballs_from_an_airplane_t.jpg
Not to be outdone by How Ridiculous and their soccer balls, the dudes from Dude Perfect booked a flight with the U.S. Air Force, filled a C-17 cargo plane with 1000 basketballs, and dropped them all to see if they could score a basket or two. Before shooting hoops, they played the world’s largest game of darts and the opposite of miniature golf.
The Awesomer
A couple of Amish ballers rolled up to an inner-city park in Indiana and gave the locals a run for their money
https://media.notthebee.com/articles/66902bd48ca4866902bd48ca49.jpg
Well, this ain’t something you see every day!
Not the Bee
AWS App Studio Promises To Generate Enterprise Apps From a Written Prompt
Amazon Web Services is the latest entrant to the generative AI game with the announcement of App Studio, a groundbreaking tool capable of building complex software applications from simple written prompts. TechCrunch’s Ron Miller reports: "App Studio is for technical folks who have technical expertise but are not professional developers, and we’re enabling them to build enterprise-grade apps," Sriram Devanathan, GM of Amazon Q Apps and AWS App Studio, told TechCrunch. Amazon defines enterprise apps as having multiple UI pages with the ability to pull from multiple data sources, perform complex operations like joins and filters, and embed business logic in them. It is aimed at IT professionals, data engineers and enterprise architects, even product managers who might lack coding skills but have the requisite company knowledge to understand what kinds of internal software applications they might need. The company is hoping to enable these employees to build applications by describing the application they need and the data sources they wish to use.
Examples of the types of applications include an inventory-tracking system or claims approval process. The user starts by entering the name of an application, calling the data sources and then describing the application they want to build. The system comes with some sample prompts to help, but users can enter an ad hoc description if they wish. It then builds a list of requirements for the application and what it will do, based on the description. The user can refine these requirements by interacting with the generative AI. In that way, it’s not unlike a lot of no-code tools that preceded it, but Devanathan says it is different. […] Once the application is complete, it goes through a mini DevOps pipeline where it can be tested before going into production. In terms of identity, security and governance, and other requirements any enterprise would have for applications being deployed, the administrator can link to existing systems when setting up the App Studio. When it gets deployed, AWS handles all of that on the back end for the customer, based on the information entered by the admin.
Read more of this story at Slashdot.
Slashdot
Database Viewer Package
https://repository-images.githubusercontent.com/782900826/1a3120f7-e5b3-4a6e-8257-bc7f3a20b722
Documentation | Features | Installation | Troubleshooting | Credits
Documentation can be found on the official website.
To install the package via Composer, run:
composer require nextbuild/database-viewer
After installing the package, publish the front-end assets by running:
php artisan database-viewer:publish
Publish config file by running:
php artisan vendor:publish --tag=database-viewer-config
Once the installation is complete, you will be able to access Database Viewer directly in your browser.
By default, the application is available at: {APP_URL}/database-viewer (for example: https://my-app.test/database-viewer).
Here are some common problems and solutions.
Please review our security policy on how to report security vulnerabilities.
The MIT License (MIT). Please see License File for more information.
Laravel News Links
10 Billion Passwords Leaked in the Largest Compilation of All Time
An anonymous reader shares a report: Cybernews researchers discovered what appears to be the largest password compilation with a staggering 9,948,575,739 unique plaintext passwords. The file with the data, titled rockyou2024.txt, was posted on July 4th by forum user ObamaCare. While the user registered in late May 2024, they have previously shared an employee database from the law firm Simmons & Simmons, a lead from an online casino AskGamblers, and student applications for Rowan College at Burlington County. The team cross-referenced the passwords included in the RockYou2024 leak with data from Cybernews’ Leaked Password Checker, which revealed that these passwords came from a mix of old and new data breaches. "In its essence, the RockYou2024 leak is a compilation of real-world passwords used by individuals all over the world. Revealing that many passwords for threat actors substantially heightens the risk of credential stuffing attacks," researchers said.
Read more of this story at Slashdot.
Slashdot