<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Tue, 14 Apr 2026 18:24:33 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>LINUX Unplugged - Episodes Tagged with “StreetComplete”</title>
    <link>https://linuxunplugged.com/tags/streetcomplete</link>
    <pubDate>Sun, 27 Aug 2023 19:45:00 -0700</pubDate>
    <description>An open show powered by community, LINUX Unplugged takes the best attributes of open collaboration and turns it into a weekly show about Linux.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Weekly Linux talk show with no script, no limits, surprise guests and tons of opinion.</itunes:subtitle>
    <itunes:author>Jupiter Broadcasting</itunes:author>
    <itunes:summary>An open show powered by community, LINUX Unplugged takes the best attributes of open collaboration and turns it into a weekly show about Linux.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/f/f31a453c-fa15-491f-8618-3f71f1d565e5/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:owner>
      <itunes:name>Jupiter Broadcasting</itunes:name>
      <itunes:email>chris@jupiterbroadcasting.com</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <itunes:category text="News">
      <itunes:category text="Tech News"/>
    </itunes:category>
<item>
  <title>525: Beating Apple to the Sauce</title>
  <link>https://linuxunplugged.com/525</link>
  <guid isPermaLink="false">3b6e8589-19d1-4f16-893d-1dc3bce41ab1</guid>
  <pubDate>Sun, 27 Aug 2023 19:45:00 -0700</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/f31a453c-fa15-491f-8618-3f71f1d565e5/3b6e8589-19d1-4f16-893d-1dc3bce41ab1.mp3" length="60772071" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>We daily drive Asahi Linux on a MacBook, chat about how the team beat Apple to a major GPU milestone, and an easy way to self-host open-source ChatGPT alternatives.</itunes:subtitle>
  <itunes:duration>1:12:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/f/f31a453c-fa15-491f-8618-3f71f1d565e5/cover.jpg?v=3"/>
  <description>We daily drive Asahi Linux on a MacBook, chat about how the team beat Apple to a major GPU milestone, and an easy way to self-host open-source ChatGPT alternatives. Special Guest: Neal Gompa.
</description>
  <itunes:keywords>Jupiter Broadcasting, Linux Podcast, Linux Unplugged, 🦙, Hector Martin, telemetry, Asahi Linux, Fedora, Fedora Asahi Remix, Arm, Apple Silicon, ARM64, macOS, Apple, Arch ARM, Neal Gompa, Davide Cavalca, Gallium3D, OpenGL ES 3.1, GPU, M1, M2, conformant GPU driver, Alyssa Rosenzweig, dual booting, UEFI, thunderbolt, Plasma, GPU acceleration, battery life, KDE, 16k pages, 16k kernel, Mac Mini, SIP, VoIP, Jitsi Meet, Mattermost, XFS, HPC, JBOD, xfs_repair, filesystem, data loss, server temperature, data center, NixOS, RDP, VNC, immutability, impermanence, ZFS, Btrfs, LUKS, OpenStreetMap, StreetComplete, LVM, disk encryption, Organic Maps, openSUSE Tumbleweed, OnePlus 6, Snapdragon 845, KDE Connect, Llama 2, Meta, OpenAI, ChatGPT, LLM, llama-gpt, llama.cpp, AI, ML, Umbrel, self-hosting, open source AI</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We daily drive Asahi Linux on a MacBook, chat about how the team beat Apple to a major GPU milestone, and an easy way to self-host open-source ChatGPT alternatives.</p><p>Special Guest: Neal Gompa.</p><p>Sponsored By:</p><ul><li><a rel="nofollow" href="http://tailscale.com/linuxunplugged">Tailscale</a>: <a rel="nofollow" href="http://tailscale.com/linuxunplugged">Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!</a></li><li><a rel="nofollow" href="https://linode.com/unplugged">Linode Cloud Hosting</a>: <a rel="nofollow" href="https://linode.com/unplugged">A special offer for all Linux Unplugged Podcast listeners and new Linode customers, visit linode.com/unplugged, and receive $100 towards your new account. </a></li><li><a rel="nofollow" href="https://1password.com/unplugged">1Password Extended Access Management</a>: <a rel="nofollow" href="https://1password.com/unplugged">Secure every sign-in for every app on every device.</a></li></ul><p><a rel="payment" href="https://jupitersignal.memberful.com/checkout?plan=52946">Support LINUX Unplugged</a></p><p>Links:</p><ul><li><a title="🎉 Alby" rel="nofollow" href="https://getalby.com/">🎉 Alby</a> &mdash; Boost into the show, first grab Alby, top it off, and then head over to the Podcast Index.</li><li><a title="⚡️ LINUX Unplugged on the Podcastindex.org" rel="nofollow" href="https://podcastindex.org/podcast/575694">⚡️ LINUX Unplugged on the Podcastindex.org</a> &mdash; You can boost from the web. 
Once Alby is topped off, visit our page on the Podcast Index.</li><li><a title="Hector Martin&#39;s Controversial Question" rel="nofollow" href="https://social.treehouse.systems/@marcan/110837288605832455">Hector Martin's Controversial Question</a> &mdash; Would you be okay with us adding some really trivial telemetry to the Asahi installer?</li><li><a title="Berlin with Brent" rel="nofollow" href="https://www.meetup.com/jupiterbroadcasting/events/295135448/">Berlin with Brent</a> &mdash; Brent will be back in Berlin for the Nextcloud Conference and can't get enough of Berlin Meetups! Friday, September 8th, 6 PM.</li><li><a title="Fedora Asahi Remix" rel="nofollow" href="https://fedora-asahi-remix.org/">Fedora Asahi Remix</a></li><li><a title="Fedora Asahi Remix Coming For Fedora Linux On Apple Silicon Hardware" rel="nofollow" href="https://www.phoronix.com/news/Fedora-Asahi-Remix-Coming">Fedora Asahi Remix Coming For Fedora Linux On Apple Silicon Hardware</a> &mdash; Fedora Asahi Remix will be their new flagship distribution for providing a polished Linux experience on Apple Silicon.</li><li><a title="Fedora Asahi Remix: bringing Fedora to Apple Silicon Macs (Flock To Fedora 2023)" rel="nofollow" href="https://www.youtube.com/watch?v=bD2R4Yt8m88">Fedora Asahi Remix: bringing Fedora to Apple Silicon Macs (Flock To Fedora 2023)</a></li><li><a title="Our new flagship distro: Fedora Asahi Remix" rel="nofollow" href="https://asahilinux.org/2023/08/fedora-asahi-remix/">Our new flagship distro: Fedora Asahi Remix</a> &mdash; We’re still working out the kinks and making things even better, so we are not quite ready to call this a release yet. We aim to officially release the Fedora Asahi Remix by the end of August 2023. 
Look forward to many new features, machine support, and more!</li><li><a title="Hector Martin: “Okay, I’m going to be honest…”" rel="nofollow" href="https://social.treehouse.systems/@marcan/109971521711413167">Hector Martin: “Okay, I’m going to be honest…”</a> &mdash; I apologize to all Asahi Linux users. You deserve better. When I chose Arch Linux ARM as a base I didn't realize it would have so many basic QA issues.</li><li><a title="Coming soon: Fedora for Apple Silicon Macs! (Fedora Discourse)" rel="nofollow" href="https://discussion.fedoraproject.org/t/coming-soon-fedora-for-apple-silicon-macs/86745">Coming soon: Fedora for Apple Silicon Macs! (Fedora Discourse)</a></li><li><a title="The first conformant M1 GPU driver" rel="nofollow" href="https://rosenzweig.io/blog/first-conformant-m1-gpu-driver.html">The first conformant M1 GPU driver</a> &mdash; Our reverse-engineered, free and open source graphics drivers are the world’s only conformant OpenGL ES 3.1 implementation for M1- and M2-family graphics hardware. 
That means our driver passed tens of thousands of tests to demonstrate correctness and is now recognized by the industry.</li><li><a title="Asahi Linux’s Apple M1/M2 Gallium3D Driver Now OpenGL ES 3.1 Conformant" rel="nofollow" href="https://www.phoronix.com/news/Asahi-Linux-GLES-3.1-AGX-M1-M2">Asahi Linux’s Apple M1/M2 Gallium3D Driver Now OpenGL ES 3.1 Conformant</a> &mdash; It's even more rewarding for the community developers in that Apple doesn't provide any conformant (OpenGL or Vulkan) graphics drivers for their Arm-based platform.</li><li><a title="Feature Support · AsahiLinux/docs Wiki" rel="nofollow" href="https://github.com/AsahiLinux/docs/wiki/Feature-Support">Feature Support · AsahiLinux/docs Wiki</a></li><li><a title="Switch to the kernel-16k variant - Fedora Discussion" rel="nofollow" href="https://discussion.fedoraproject.org/t/switch-to-the-kernel-16k-variant/87711">Switch to the kernel-16k variant - Fedora Discussion</a></li><li><a title="NixOS: Unlocking your LUKS via SSH and Tor" rel="nofollow" href="https://nixos.wiki/wiki/Remote_LUKS_Unlocking">NixOS: Unlocking your LUKS via SSH and Tor</a></li><li><a title="StreetComplete" rel="nofollow" href="https://github.com/streetcomplete/StreetComplete">StreetComplete</a> &mdash; Easy to use OpenStreetMap editor for Android.</li><li><a title="getumbrel/llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device." rel="nofollow" href="https://github.com/getumbrel/llama-gpt">getumbrel/llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device.</a></li><li><a title="serge-chat/serge: A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API." rel="nofollow" href="https://github.com/serge-chat/serge">serge-chat/serge: A web interface for chatting with Alpaca through llama.cpp. 
Fully dockerized, with an easy to use API.</a></li><li><a title="liltom-eth/llama2-webui: Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use" rel="nofollow" href="https://github.com/liltom-eth/llama2-webui">liltom-eth/llama2-webui: Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use</a></li><li><a title="llama.cpp" rel="nofollow" href="https://github.com/ggerganov/llama.cpp">llama.cpp</a> &mdash; Port of Facebook’s LLaMA model in C/C++</li><li><a title="Llama2.c" rel="nofollow" href="https://github.com/karpathy/llama2.c">Llama2.c</a> &mdash; Inference Llama 2 in one file of pure C</li><li><a title="Koboldcpp" rel="nofollow" href="https://github.com/LostRuins/koboldcpp">Koboldcpp</a> &mdash; A simple one-file way to run various GGML models with KoboldAI’s UI</li><li><a title="lollms-webui" rel="nofollow" href="https://github.com/ParisNeo/lollms-webui">lollms-webui</a> &mdash; Lord of Large Language Models Web User Interface</li><li><a title="LM Studio" rel="nofollow" href="https://lmstudio.ai/">LM Studio</a> &mdash; Discover, download, and run local LLMs</li><li><a title="text-generation-webui" rel="nofollow" href="https://github.com/oobabooga/text-generation-webui">text-generation-webui</a> &mdash; A Gradio web UI for Large Language Models. 
Supports transformers, GPTQ, llama.cpp (ggml/gguf), Llama models.</li><li><a title="A comprehensive guide to running Llama 2 locally" rel="nofollow" href="https://replicate.com/blog/run-llama-locally">A comprehensive guide to running Llama 2 locally</a> &mdash; Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.</li><li><a title="Meta Releases Code Llama, a Coding Version of Llama 2" rel="nofollow" href="https://www.wired.com/story/meta-code-llama/">Meta Releases Code Llama, a Coding Version of Llama 2</a></li><li><a title="Introducing Code Llama, a state-of-the-art large language model for coding" rel="nofollow" href="https://ai.meta.com/blog/code-llama-large-language-model-coding/">Introducing Code Llama, a state-of-the-art large language model for coding</a></li><li><a title="Llama and ChatGPT Are Not Open-Source" rel="nofollow" href="https://spectrum.ieee.org/open-source-llm-not-open">Llama and ChatGPT Are Not Open-Source</a></li><li><a title="Meta launches Llama 2, a source-available AI model that allows commercial applications" rel="nofollow" href="https://arstechnica.com/information-technology/2023/07/meta-launches-llama-2-an-open-source-ai-model-that-allows-commercial-applications/">Meta launches Llama 2, a source-available AI model that allows commercial applications</a> &mdash; A family of pretrained and fine-tuned language models in sizes from 7 to 70 billion parameters.</li><li><a title="Meta’s Llama 2 is not open source" rel="nofollow" href="https://www.theregister.com/2023/07/21/llama_is_not_open_source/">Meta’s Llama 2 is not open source</a> &mdash; Meta's newly released large language model Llama 2 is not open source.</li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We daily drive Asahi Linux on a MacBook, chat about how the team beat Apple to a major GPU milestone, and an easy way to self-host open-source ChatGPT alternatives.</p><p>Special Guest: Neal Gompa.</p><p>Sponsored By:</p><ul><li><a rel="nofollow" href="http://tailscale.com/linuxunplugged">Tailscale</a>: <a rel="nofollow" href="http://tailscale.com/linuxunplugged">Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!</a></li><li><a rel="nofollow" href="https://linode.com/unplugged">Linode Cloud Hosting</a>: <a rel="nofollow" href="https://linode.com/unplugged">A special offer for all Linux Unplugged Podcast listeners and new Linode customers, visit linode.com/unplugged, and receive $100 towards your new account. </a></li><li><a rel="nofollow" href="https://1password.com/unplugged">1Password Extended Access Management</a>: <a rel="nofollow" href="https://1password.com/unplugged">Secure every sign-in for every app on every device.</a></li></ul><p><a rel="payment" href="https://jupitersignal.memberful.com/checkout?plan=52946">Support LINUX Unplugged</a></p><p>Links:</p><ul><li><a title="🎉 Alby" rel="nofollow" href="https://getalby.com/">🎉 Alby</a> &mdash; Boost into the show, first grab Alby, top it off, and then head over to the Podcast Index.</li><li><a title="⚡️ LINUX Unplugged on the Podcastindex.org" rel="nofollow" href="https://podcastindex.org/podcast/575694">⚡️ LINUX Unplugged on the Podcastindex.org</a> &mdash; You can boost from the web. 
Once Alby is topped off, visit our page on the Podcast Index.</li><li><a title="Hector Martin&#39;s Controversial Question" rel="nofollow" href="https://social.treehouse.systems/@marcan/110837288605832455">Hector Martin's Controversial Question</a> &mdash; Would you be okay with us adding some really trivial telemetry to the Asahi installer?</li><li><a title="Berlin with Brent" rel="nofollow" href="https://www.meetup.com/jupiterbroadcasting/events/295135448/">Berlin with Brent</a> &mdash; Brent will be back in Berlin for the Nextcloud Conference and can't get enough of Berlin Meetups! Friday, September 8th, 6 PM.</li><li><a title="Fedora Asahi Remix" rel="nofollow" href="https://fedora-asahi-remix.org/">Fedora Asahi Remix</a></li><li><a title="Fedora Asahi Remix Coming For Fedora Linux On Apple Silicon Hardware" rel="nofollow" href="https://www.phoronix.com/news/Fedora-Asahi-Remix-Coming">Fedora Asahi Remix Coming For Fedora Linux On Apple Silicon Hardware</a> &mdash; Fedora Asahi Remix will be their new flagship distribution for providing a polished Linux experience on Apple Silicon.</li><li><a title="Fedora Asahi Remix: bringing Fedora to Apple Silicon Macs (Flock To Fedora 2023)" rel="nofollow" href="https://www.youtube.com/watch?v=bD2R4Yt8m88">Fedora Asahi Remix: bringing Fedora to Apple Silicon Macs (Flock To Fedora 2023)</a></li><li><a title="Our new flagship distro: Fedora Asahi Remix" rel="nofollow" href="https://asahilinux.org/2023/08/fedora-asahi-remix/">Our new flagship distro: Fedora Asahi Remix</a> &mdash; We’re still working out the kinks and making things even better, so we are not quite ready to call this a release yet. We aim to officially release the Fedora Asahi Remix by the end of August 2023. 
Look forward to many new features, machine support, and more!</li><li><a title="Hector Martin: “Okay, I’m going to be honest…”" rel="nofollow" href="https://social.treehouse.systems/@marcan/109971521711413167">Hector Martin: “Okay, I’m going to be honest…”</a> &mdash; I apologize to all Asahi Linux users. You deserve better. When I chose Arch Linux ARM as a base I didn't realize it would have so many basic QA issues.</li><li><a title="Coming soon: Fedora for Apple Silicon Macs! (Fedora Discourse)" rel="nofollow" href="https://discussion.fedoraproject.org/t/coming-soon-fedora-for-apple-silicon-macs/86745">Coming soon: Fedora for Apple Silicon Macs! (Fedora Discourse)</a></li><li><a title="The first conformant M1 GPU driver" rel="nofollow" href="https://rosenzweig.io/blog/first-conformant-m1-gpu-driver.html">The first conformant M1 GPU driver</a> &mdash; Our reverse-engineered, free and open source graphics drivers are the world’s only conformant OpenGL ES 3.1 implementation for M1- and M2-family graphics hardware. 
That means our driver passed tens of thousands of tests to demonstrate correctness and is now recognized by the industry.</li><li><a title="Asahi Linux’s Apple M1/M2 Gallium3D Driver Now OpenGL ES 3.1 Conformant" rel="nofollow" href="https://www.phoronix.com/news/Asahi-Linux-GLES-3.1-AGX-M1-M2">Asahi Linux’s Apple M1/M2 Gallium3D Driver Now OpenGL ES 3.1 Conformant</a> &mdash; It's even more rewarding for the community developers in that Apple doesn't provide any conformant (OpenGL or Vulkan) graphics drivers for their Arm-based platform.</li><li><a title="Feature Support · AsahiLinux/docs Wiki" rel="nofollow" href="https://github.com/AsahiLinux/docs/wiki/Feature-Support">Feature Support · AsahiLinux/docs Wiki</a></li><li><a title="Switch to the kernel-16k variant - Fedora Discussion" rel="nofollow" href="https://discussion.fedoraproject.org/t/switch-to-the-kernel-16k-variant/87711">Switch to the kernel-16k variant - Fedora Discussion</a></li><li><a title="NixOS: Unlocking your LUKS via SSH and Tor" rel="nofollow" href="https://nixos.wiki/wiki/Remote_LUKS_Unlocking">NixOS: Unlocking your LUKS via SSH and Tor</a></li><li><a title="StreetComplete" rel="nofollow" href="https://github.com/streetcomplete/StreetComplete">StreetComplete</a> &mdash; Easy to use OpenStreetMap editor for Android.</li><li><a title="getumbrel/llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device." rel="nofollow" href="https://github.com/getumbrel/llama-gpt">getumbrel/llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device.</a></li><li><a title="serge-chat/serge: A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API." rel="nofollow" href="https://github.com/serge-chat/serge">serge-chat/serge: A web interface for chatting with Alpaca through llama.cpp. 
Fully dockerized, with an easy to use API.</a></li><li><a title="liltom-eth/llama2-webui: Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use" rel="nofollow" href="https://github.com/liltom-eth/llama2-webui">liltom-eth/llama2-webui: Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use</a></li><li><a title="llama.cpp" rel="nofollow" href="https://github.com/ggerganov/llama.cpp">llama.cpp</a> &mdash; Port of Facebook’s LLaMA model in C/C++</li><li><a title="Llama2.c" rel="nofollow" href="https://github.com/karpathy/llama2.c">Llama2.c</a> &mdash; Inference Llama 2 in one file of pure C</li><li><a title="Koboldcpp" rel="nofollow" href="https://github.com/LostRuins/koboldcpp">Koboldcpp</a> &mdash; A simple one-file way to run various GGML models with KoboldAI’s UI</li><li><a title="lollms-webui" rel="nofollow" href="https://github.com/ParisNeo/lollms-webui">lollms-webui</a> &mdash; Lord of Large Language Models Web User Interface</li><li><a title="LM Studio" rel="nofollow" href="https://lmstudio.ai/">LM Studio</a> &mdash; Discover, download, and run local LLMs</li><li><a title="text-generation-webui" rel="nofollow" href="https://github.com/oobabooga/text-generation-webui">text-generation-webui</a> &mdash; A Gradio web UI for Large Language Models. 
Supports transformers, GPTQ, llama.cpp (ggml/gguf), Llama models.</li><li><a title="A comprehensive guide to running Llama 2 locally" rel="nofollow" href="https://replicate.com/blog/run-llama-locally">A comprehensive guide to running Llama 2 locally</a> &mdash; Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.</li><li><a title="Meta Releases Code Llama, a Coding Version of Llama 2" rel="nofollow" href="https://www.wired.com/story/meta-code-llama/">Meta Releases Code Llama, a Coding Version of Llama 2</a></li><li><a title="Introducing Code Llama, a state-of-the-art large language model for coding" rel="nofollow" href="https://ai.meta.com/blog/code-llama-large-language-model-coding/">Introducing Code Llama, a state-of-the-art large language model for coding</a></li><li><a title="Llama and ChatGPT Are Not Open-Source" rel="nofollow" href="https://spectrum.ieee.org/open-source-llm-not-open">Llama and ChatGPT Are Not Open-Source</a></li><li><a title="Meta launches Llama 2, a source-available AI model that allows commercial applications" rel="nofollow" href="https://arstechnica.com/information-technology/2023/07/meta-launches-llama-2-an-open-source-ai-model-that-allows-commercial-applications/">Meta launches Llama 2, a source-available AI model that allows commercial applications</a> &mdash; A family of pretrained and fine-tuned language models in sizes from 7 to 70 billion parameters.</li><li><a title="Meta’s Llama 2 is not open source" rel="nofollow" href="https://www.theregister.com/2023/07/21/llama_is_not_open_source/">Meta’s Llama 2 is not open source</a> &mdash; Meta's newly released large language model Llama 2 is not open source.</li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
