The Discussion:
We had a pleasant conversation about projects we are working on, including uses for older Raspberry Pis now that the newer/better models have arrived.
Gib showed us a quick demo of ITX Studio. It can take a short description and build a storyline, which is then used to create videos. He uploaded a photo of a ’66 Mustang, gave a description of a car in a race, and selected a sci-fi theme. The resulting storyline included several scenes, each with its own description, which could be further embellished. We had a good laugh when the driver put his hand on the shifter and the video showed the driver with two normal hands plus a third, disembodied hand embracing the shifter!
Carl talked about the latest LLM from Microsoft. The phi4 model is an update to phi3, and it provided some clear and practical responses. While he likes llama3.2 and llama3.2-vision, phi4 has become his new “go to” model.
Pat talked about running an LLM that could use local documents (PDFs, ebooks, text files, and so on) as a source. Google’s NotebookLM tool can do this, but it is a hosted service and cannot be used without connecting to Google’s servers. We’ll be watching for updates to Open Notebook, which can run against a local instance of ollama.
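For the curious, here is a minimal sketch of the local-documents idea using ollama’s REST API (which listens on http://localhost:11434 by default): read a document from disk, stuff it into the prompt, and build the JSON body for the /api/generate endpoint. The model name "llama3.2", the character limit, and the prompt wording are illustrative assumptions; a real tool like Open Notebook would chunk and index documents rather than truncate them.

```python
import json
import tempfile
from pathlib import Path

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default address
MAX_CONTEXT_CHARS = 8_000  # naive truncation; a real tool would chunk/embed

def build_payload(doc_path: Path, question: str, model: str = "llama3.2") -> dict:
    """Build the JSON body for ollama's /api/generate endpoint,
    embedding the local document as context for the question."""
    context = doc_path.read_text(encoding="utf-8")[:MAX_CONTEXT_CHARS]
    prompt = (
        "Answer the question using only the document below.\n\n"
        f"--- DOCUMENT ---\n{context}\n--- END ---\n\n"
        f"Question: {question}"
    )
    return {"model": model, "prompt": prompt, "stream": False}

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        doc = Path(tmp) / "notes.txt"
        doc.write_text("SEMCO meets monthly in southeast Michigan.")
        payload = build_payload(doc, "Where does SEMCO meet?")
        print(json.dumps(payload, indent=2))
        # With an ollama server actually running, you would POST the payload:
        # import urllib.request
        # req = urllib.request.Request(OLLAMA_URL,
        #     data=json.dumps(payload).encode(), method="POST")
        # print(json.loads(urllib.request.urlopen(req).read())["response"])
```

The POST call is left commented out so the sketch runs without a server; uncomment it on a machine where ollama is installed and a model has been pulled.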
Gib said it would be good if there were a website that used AI to collect news stories from various sources and then show the political slant of each article. Carl said he had recently discovered Ground News, which does just that: it provides a quick and easy way to see the headlines and who is covering them.
RichardTheTechMan asked about running ollama on older hardware. For best results, a supported Nvidia GPU is a must. Acceptable results can be expected with a minimum of 32 GB of RAM and fast storage; anything less will be painfully slow, if it works at all.
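Some back-of-envelope arithmetic shows why 32 GB of RAM is a sensible floor for CPU-only inference. Weights alone need roughly (parameter count × bytes per weight), plus runtime overhead for the KV cache and buffers; the 1.2× overhead multiplier below is an assumption for illustration, not a measured figure.

```python
def model_ram_gb(params_billions: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Approximate resident memory for a model's weights.

    overhead: rough multiplier for KV cache and runtime buffers (assumed).
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight * overhead / 1e9

# Illustrative sizes: llama3.2 has a 3B variant, phi4 is a 14B model.
for name, params, bits in [
    ("llama3.2 3B, 4-bit quantized", 3, 4),
    ("phi4 14B, 4-bit quantized", 14, 4),
    ("phi4 14B, 16-bit (unquantized)", 14, 16),
]:
    print(f"{name}: ~{model_ram_gb(params, bits):.1f} GB")
```

A 14B model at 16-bit weights lands in the mid-30s of gigabytes, which would not fit in 32 GB; 4-bit quantized builds are what make these models practical on ordinary desktops.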
RTTM also plugged SEMCO and a presentation by Carl on the topic “Protect yourself from the Internet slime.”