Build Web Apps and connect LLM's & SLM's locally using Ollama and LangChain

Video ID: mNKOKnSbG4Q

YouTube URL: https://www.youtube.com/watch?v=mNKOKnSbG4Q

Added At: 13-06-25 21:18:51

Processed: No

Sentiment: Neutral

Categories: Education, Tech

Tags: AI, LLaMA, Langchain, Flask, Python, Web App, Image Detection, Natural Language Processing

Summary

• The speaker shares a Flask web app they created during the NYC AI Summit.
• The app uses Ollama and LangChain to connect to a locally hosted large language model or small language model.
• The app can respond to text prompts, including asking for information about New York City, and can also use image detection with a vision model.

Transcript

Hey everyone, I wanted to share the Flask web app that I created during my session at the NYC AI Summit last week. It shows how you can create multimodal applications using Ollama and LangChain in order to connect to a locally hosted large language model or small language model.

Here's a quick demo. It's a single-page web app built with Flask, and I'm using a SQLite DB to store all these conversation threads. So here, if I simply give it some instructions, I can say, "Can you tell me about New York City in six sentences?"
You can see I'm using the Phi model, which is a small language model, and you can see it's responding with a very accurate answer. In the same way, I can also use images. Let's say I want to look at this image here, and I can say, "Describe the image." Just to show you what that image is, let me open it here: it's an image of a kid riding his bike. Let's see if it can detect that correctly.

I do have a vision model: I'm using LLaVA for image detection. If the user uploads an image, it uses LLaVA; if the user just responds with text, then it uses the Phi model. You can see here the model is able to detect the image correctly. It says, "In the image, there's a young boy riding a small bike; it appears to be in motion." So you can see the accuracy is pretty good. Like I said, I have two models that are locally hosted on my machine; if an image is detected as part of the prompt, it uses LLaVA in order to reason over the image uploaded by the user. I did upload this sample to my GitHub page.
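The text-versus-image routing described above can be sketched as a small helper. This is a minimal sketch, not the author's code: it builds a request payload for Ollama's REST `/api/generate` endpoint directly rather than going through LangChain, and only the model names `phi` and `llava` are taken from the video.

```python
import base64

def build_ollama_payload(prompt, image_bytes=None):
    """Route to the vision model when an image is attached,
    otherwise to the small text model, and build an Ollama
    /api/generate request payload."""
    if image_bytes is not None:
        # An attached image goes to the LLaVA vision model;
        # Ollama expects images as base64-encoded strings.
        return {
            "model": "llava",
            "prompt": prompt,
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }
    # Plain text goes to the small Phi model.
    return {"model": "phi", "prompt": prompt}

# Text-only prompt routes to phi
print(build_ollama_payload("Tell me about NYC")["model"])  # phi
# A prompt with an attached image routes to llava
print(build_ollama_payload("Describe the image", b"\x89PNG")["model"])  # llava
```

In the real app this payload would be POSTed to the locally running Ollama server, which listens on http://localhost:11434 by default.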
I can quickly show you the code. In the back end, we have app.py, which is the main file. These are all the libraries that you need; you can see I'm using SQLite, with SQLAlchemy to write to it.
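The video uses SQLAlchemy on top of SQLite for the conversation threads; here is a minimal sketch of the same idea using only the standard library's `sqlite3` module. The table and column names are assumptions for illustration, not the ones from the repository.

```python
import sqlite3

# In-memory DB for the sketch; the app would use a file such as "app.db"
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS messages (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           thread_id INTEGER NOT NULL,
           role TEXT NOT NULL,      -- "user" or "assistant"
           content TEXT NOT NULL
       )"""
)

def save_message(thread_id, role, content):
    """Append one message to a conversation thread."""
    conn.execute(
        "INSERT INTO messages (thread_id, role, content) VALUES (?, ?, ?)",
        (thread_id, role, content),
    )
    conn.commit()

def load_thread(thread_id):
    """Return a thread's messages in insertion order."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE thread_id = ? ORDER BY id",
        (thread_id,),
    )
    return rows.fetchall()

save_message(1, "user", "Tell me about New York City in six sentences")
save_message(1, "assistant", "New York City is ...")
print(load_thread(1))
```

Each thread is just the ordered list of its messages, which is enough to re-render a conversation in the single-page UI.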
Then, in terms of the models: if you open llm.py, here's the logic for calling the models that I have. So there's the Phi model; you can see I'm using the LangChain library here, and it's calling the Phi model in order to get the response. Then, if there is an image uploaded by the user, it's going to use the LLaVA
model. So the code is there for you to try out. And for those of you who don't know what Ollama is: it's an AI tool designed to enable users to run large language models like Llama locally on their machine or locally on a server. They also have a LangChain library, so it's really easy to use if you want to add AI capabilities to your existing applications or web applications. Here's the site where you can see all the models that are available on Ollama; you can download any of these and use them as part of your application. So you can see the two I have
here uh I have the lava which does the
image detection it's a vision model and
then I have uh five which is just 1.6
GB um do let me know in your comments if
you have any questions on this uh thanks
for watching
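As a setup fragment for anyone who wants to reproduce the demo, the two models mentioned in the video can be fetched with the Ollama CLI (this assumes Ollama is already installed and its daemon is running):

```shell
# Pull the two locally hosted models used in the demo
ollama pull phi    # small text model, roughly 1.6 GB
ollama pull llava  # vision model used for image prompts
```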