% python ask.py -e .env.ollama -c -q "How does Ollama work?"
2025-01-20 13:36:15,026 - INFO - No SEARCH_API_URL or SEARCH_API_KEY env variable set.
2025-01-20 13:36:15,026 - INFO - Using the default proxy at https://svc.leettools.com:8098
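Before any of the heavy lifting starts, the script reads its settings from the `.env.ollama` file passed via `-e` and, finding no `SEARCH_API_URL`/`SEARCH_API_KEY`, falls back to the default proxy shown above. A minimal sketch of that startup pattern, assuming `python-dotenv` (the variable names and fallback URL are taken from the log; everything else is illustrative):

```python
import logging
import os

from dotenv import load_dotenv  # pip install python-dotenv

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s - %(levelname)s - %(message)s")
logger = logging.getLogger(__name__)

# Load SEARCH_API_URL / SEARCH_API_KEY (and model settings) from the env file
load_dotenv(".env.ollama")

search_api_url = os.getenv("SEARCH_API_URL")
search_api_key = os.getenv("SEARCH_API_KEY")

if not search_api_url or not search_api_key:
    logger.info("No SEARCH_API_URL or SEARCH_API_KEY env variable set.")
    search_api_url = "https://svc.leettools.com:8098"  # default proxy from the log
    logger.info("Using the default proxy at %s", search_api_url)
```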
2025-01-20 13:36:19,395 - INFO - Initializing converter ...
2025-01-20 13:36:19,395 - INFO - ✅ Successfully initialized Docling.
2025-01-20 13:36:19,395 - INFO - Initializing chunker ...
2025-01-20 13:36:19,614 - INFO - ✅ Successfully initialized Chonkie.
2025-01-20 13:36:19,917 - INFO - Initializing database ...
2025-01-20 13:36:19,992 - INFO - ✅ Successfully initialized DuckDB.
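The three initialization steps above map onto the libraries named in the log: Docling converts fetched pages into plain text, Chonkie splits that text into chunks, and DuckDB stores the chunks and their indexes. A hedged sketch of this setup (constructor arguments and the database path are assumptions and may differ between library versions):

```python
import duckdb
from chonkie import TokenChunker                           # pip install chonkie
from docling.document_converter import DocumentConverter   # pip install docling

# Converter: turns HTML/PDF documents into clean text or markdown
converter = DocumentConverter()

# Chunker: splits long documents into fixed-size token chunks
chunker = TokenChunker(chunk_size=512, chunk_overlap=50)

# Database: a local DuckDB file holding chunks, embeddings, and both indexes
con = duckdb.connect("ask.duckdb")
```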
2025-01-20 13:36:19,992 - INFO - Searching the web ...
2025-01-20 13:36:20,653 - INFO - ✅ Found 10 links for query: How does Ollama work?
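The web search step sends the question to the configured search endpoint (here, the default proxy) and collects the result links. A rough sketch of what that call might look like, assuming a simple JSON search API behind `SEARCH_API_URL`; the proxy's real request and response format is not shown in the log:

```python
import requests

def search_web(query: str, api_url: str, api_key: str | None, max_results: int = 10) -> list[str]:
    """Return up to max_results result URLs for the query (request/response shape assumed)."""
    resp = requests.get(
        api_url,
        params={"q": query, "key": api_key, "num": max_results},
        timeout=10,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])][:max_results]

links = search_web("How does Ollama work?", search_api_url, search_api_key)
print(f"Found {len(links)} links")
```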
2025-01-20 13:36:20,653 - INFO - Scraping the URLs ...
2025-01-20 13:36:20,653 - INFO - Scraping https://www.reddit.com/r/ollama/comments/197thp1/does_anyone_know_how_ollama_works_under_the_hood/ ...
2025-01-20 13:36:20,654 - INFO - Scraping https://medium.com/@mauryaanoop3/ollama-a-deep-dive-into-running-large-language-models-locally-part-1-0a4b70b30982 ...
2025-01-20 13:36:20,655 - INFO - Scraping https://www.reddit.com/r/LocalLLaMA/comments/1dhyxq8/why_use_ollama/ ...
2025-01-20 13:36:20,656 - INFO - Scraping https://github.com/jmorganca/ollama/issues/1014 ...
2025-01-20 13:36:20,657 - INFO - Scraping https://www.listedai.co/ai/ollama ...
2025-01-20 13:36:20,657 - INFO - Scraping https://www.andreagrandi.it/posts/ollama-running-llm-locally/ ...
2025-01-20 13:36:20,658 - INFO - Scraping https://itsfoss.com/ollama/ ...
2025-01-20 13:36:20,659 - INFO - Scraping https://community.n8n.io/t/ollama-embedding-does-not-accept-the-model-but-using-it-with-http-request-works/64457 ...
2025-01-20 13:36:20,659 - INFO - Scraping https://community.frame.work/t/ollama-framework-13-amd/53848 ...
2025-01-20 13:36:20,660 - INFO - Scraping https://abvijaykumar.medium.com/ollama-brings-runtime-to-serve-llms-everywhere-8a23b6f6a1b4 ...
2025-01-20 13:36:20,802 - INFO - ✅ Successfully scraped https://abvijaykumar.medium.com/ollama-brings-runtime-to-serve-llms-everywhere-8a23b6f6a1b4 with length: 6408
2025-01-20 13:36:20,861 - INFO - ✅ Successfully scraped https://www.andreagrandi.it/posts/ollama-running-llm-locally/ with length: 10535
2025-01-20 13:36:20,891 - INFO - ✅ Successfully scraped https://itsfoss.com/ollama/ with length: 8772
2025-01-20 13:36:20,969 - INFO - ✅ Successfully scraped https://community.frame.work/t/ollama-framework-13-amd/53848 with length: 4434
2025-01-20 13:36:21,109 - WARNING - Body text too short for url: https://github.com/jmorganca/ollama/issues/1014, length: 9
2025-01-20 13:36:21,370 - INFO - ✅ Successfully scraped https://www.reddit.com/r/ollama/comments/197thp1/does_anyone_know_how_ollama_works_under_the_hood/ with length: 2116
2025-01-20 13:36:21,378 - INFO - ✅ Successfully scraped https://medium.com/@mauryaanoop3/ollama-a-deep-dive-into-running-large-language-models-locally-part-1-0a4b70b30982 with length: 6594
2025-01-20 13:36:21,432 - INFO - ✅ Successfully scraped https://www.reddit.com/r/LocalLLaMA/comments/1dhyxq8/why_use_ollama/ with length: 2304
2025-01-20 13:36:21,734 - INFO - ✅ Successfully scraped https://community.n8n.io/t/ollama-embedding-does-not-accept-the-model-but-using-it-with-http-request-works/64457 with length: 2875
2025-01-20 13:36:21,776 - INFO - ✅ Successfully scraped https://www.listedai.co/ai/ollama with length: 5516
2025-01-20 13:36:21,776 - INFO - ✅ Scraped 9 URLs.
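All ten result URLs are scraped in parallel and reduced to body text; pages whose extracted text is too short (the GitHub issue above yielded only 9 characters) are dropped with a warning, which is why 9 of the 10 URLs survive. A simple sketch of that filter with `requests` and `BeautifulSoup`; the thread pool, threshold, and helper names are assumptions rather than the script's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

MIN_BODY_LENGTH = 100  # assumed threshold; the real cutoff is not shown in the log

def scrape(url: str) -> str | None:
    """Fetch a URL and return its visible body text, or None if it is too short."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    if len(text) < MIN_BODY_LENGTH:
        print(f"Body text too short for url: {url}, length: {len(text)}")
        return None
    return text

with ThreadPoolExecutor(max_workers=10) as pool:
    documents = [doc for doc in pool.map(scrape, links) if doc]
print(f"Scraped {len(documents)} URLs.")
```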
2025-01-20 13:36:21,776 - INFO - Chunking the text ...
2025-01-20 13:36:21,784 - INFO - ✅ Generated 18 chunks ...
2025-01-20 13:36:21,784 - INFO - Saving 18 chunks to DB ...
2025-01-20 13:36:21,807 - INFO - Embedding 9 batches of chunks ...
2025-01-20 13:36:40,752 - INFO - ✅ Finished embedding.
2025-01-20 13:36:40,930 - INFO - ✅ Created the vector index ...
2025-01-20 13:36:41,010 - INFO - ✅ Created the full text search index ...
2025-01-20 13:36:41,010 - INFO - ✅ Successfully embedded and saved chunks to DB.
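The 9 scraped documents become 18 chunks, which are embedded in batches, written to DuckDB, and covered by both a vector index and a full-text index. The two indexes in the log line up naturally with DuckDB's `vss` (HNSW) and `fts` (BM25) extensions; here is a hedged sketch under that assumption, with the embedding model, table layout, and dimensions chosen for illustration:

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

# Chunk every document, then embed each chunk (the real script batches these calls)
chunks = [chunk.text for doc in documents for chunk in chunker.chunk(doc)]
embeddings = [
    ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]
    for text in chunks
]

# Persist chunks + embeddings, then build the two indexes
con.execute("INSTALL vss; LOAD vss; INSTALL fts; LOAD fts;")
con.execute("SET hnsw_enable_experimental_persistence = true")  # needed for HNSW on a file-backed DB
con.execute("CREATE TABLE chunks (id INTEGER, content TEXT, embedding FLOAT[768])")
con.executemany(
    "INSERT INTO chunks VALUES (?, ?, ?)",
    [(i, c, e) for i, (c, e) in enumerate(zip(chunks, embeddings))],
)
con.execute("CREATE INDEX idx_vec ON chunks USING HNSW (embedding)")   # vector index
con.execute("PRAGMA create_fts_index('chunks', 'id', 'content')")      # full-text search index
```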
2025-01-20 13:36:41,011 - INFO - Querying the vector DB to get context ...
2025-01-20 13:36:41,091 - INFO - Running full-text search ...
2025-01-20 13:36:41,118 - INFO - ✅ Got 10 matched chunks.
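Retrieval is hybrid: the question is embedded and matched against the vector index, the same question is also run through the BM25 full-text index, and the two result lists are merged into the 10 context chunks. A sketch of both queries, continuing the assumed DuckDB layout from the previous snippet:

```python
question = "How does Ollama work?"
q_emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]

# Vector search: most similar chunks by cosine similarity over the embedding column
vector_hits = con.execute(
    """
    SELECT id, content
    FROM chunks
    ORDER BY array_cosine_similarity(embedding, ?::FLOAT[768]) DESC
    LIMIT 5
    """,
    [q_emb],
).fetchall()

# Full-text search: BM25 scores from the fts index created earlier
fts_hits = con.execute(
    """
    SELECT id, content FROM (
        SELECT *, fts_main_chunks.match_bm25(id, ?) AS score FROM chunks
    ) sq
    WHERE score IS NOT NULL
    ORDER BY score DESC
    LIMIT 5
    """,
    [question],
).fetchall()

# Merge the two result sets, de-duplicating by chunk id
context_chunks = {row[0]: row[1] for row in vector_hits + fts_hits}
print(f"Got {len(context_chunks)} matched chunks.")
```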
2025-01-20 13:36:41,118 - INFO - Running inference with context ...
2025-01-20 13:37:59,233 - INFO - ✅ Finished inference API call.
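For the final answer, the matched chunks are concatenated into a context block and sent to the local model; since the run uses `.env.ollama`, the call presumably goes to Ollama's OpenAI-compatible endpoint. A hedged sketch of that last step (model name, prompt wording, and endpoint defaults are assumptions):

```python
from openai import OpenAI  # pip install openai

# Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

context = "\n\n".join(context_chunks.values())
response = client.chat.completions.create(
    model="llama3.2",  # assumed; any model pulled into the local Ollama works
    messages=[
        {"role": "system",
         "content": "Answer the question using only the provided context."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: How does Ollama work?"},
    ],
)
print(response.choices[0].message.content)
```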
2025-01-20 13:37:59,234 - INFO - Generating output ...
# Answer
Here is the reformatted output:
**Conclusion**
Though there are plenty of similar tools, Ollama has become the most popular tool for running LLMs locally. The ease of installing and switching between different LLMs makes it ideal for beginners who want to use local AI.
# References
[1] https://www.reddit.com/r/ollama/comments/197thp1/does_anyone_know_how_ollama_works_under_the_hood/
[2] https://community.frame.work/t/ollama-framework-13-amd/53848
[3] https://www.reddit.com/r/LocalLLaMA/comments/1dhyxq8/why_use_ollama/
[4] https://itsfoss.com/ollama/
[5] https://abvijaykumar.medium.com/ollama-brings-runtime-to-serve-llms-everywhere-8a23b6f6a1b4
[6] https://www.listedai.co/ai/ollama
[7] https://community.n8n.io/t/ollama-embedding-does-not-accept-the-model-but-using-it-with-http-request-works/64457
[8] https://medium.com/@mauryaanoop3/ollama-a-deep-dive-into-running-large-language-models-locally-part-1-0a4b70b30982
[9] https://itsfoss.com/ollama/
[10] https://itsfoss.com/ollama