Hi friends! 👋
In this tutorial, you'll create a super simple website where users can type a question and get a response from an AI, just like ChatGPT, but using Meta's LLaMA model (an openly available AI model; its license has some restrictions, so it's not strictly open-source) through Hugging Face.
You’ll learn how to:
- ✅ Use an AI model
- ✅ Create a basic web page
- ✅ Connect it all with Python code
🧠 What Are These Tools?
Before we start coding, let’s understand the tools we’re using:
🟣 Hugging Face
Think of this as a giant library of powerful AI models (like LLaMA, GPT, etc.) that you can use in your own apps. It’s like the Netflix of AI models — just log in, choose a model, and go!
🟠 Flask (Python)
This is a mini program that turns your Python code into something that can be used on a website. It’s like the brain behind the scenes — it listens to the user’s question, sends it to the AI, and gives back the answer.
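In code terms, the backend's whole job is a simple contract: take a JSON request like `{"prompt": "..."}` and return a JSON response like `{"response": "..."}`. Here's a toy sketch of that contract in plain Python. The `answer_question` function is made up just for illustration; in Step 5 the real answer will come from the AI model instead:

```python
import json

# A stand-in for the AI: just echoes the question back.
# (Hypothetical helper for illustration only.)
def answer_question(prompt):
    return "You asked: " + prompt

# What the website will send to the backend:
request_body = json.dumps({"prompt": "What is Python?"})

# What the backend does with it:
data = json.loads(request_body)                       # parse the JSON request
answer = answer_question(data.get("prompt", ""))      # produce an answer
response_body = json.dumps({"response": answer})      # wrap it back up as JSON

print(response_body)
```

Once you see this request-in, response-out shape, Flask is just the tool that runs it for real over the internet.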
✅ Step-by-Step Tutorial
🔧 Step 1: Open Google Colab
Go to Google Colab and open a new notebook.
📦 Step 2: Install the Tools We Need
!pip install flask flask-ngrok transformers torch pyngrok
🔐 Step 3: Log Into Hugging Face
- Go to https://huggingface.co
- Sign up/log in → Go to Settings > Access Tokens
- Create a token (a "read" token is enough), copy it, and paste it into this code:
from huggingface_hub import login
HUGGING_FACE_KEY = "paste-your-key-here"
login(HUGGING_FACE_KEY)
🤖 Step 4: Load Meta's LLaMA Model
Note: Meta's Llama models are gated on Hugging Face. Visit the model page first and accept the license / request access with the same account your token belongs to, or the download will fail.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
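What do these two pieces actually do? The tokenizer turns text into numbers (token IDs) that the model can read, and turns numbers back into text. The real LLaMA tokenizer uses subword pieces and a vocabulary learned from data, but here's a toy word-level version, made up purely to show the idea:

```python
# Toy illustration only; the real tokenizer is far more sophisticated.
vocab = {"hello": 0, "world": 1, "ai": 2}
inv_vocab = {i: w for w, i in vocab.items()}

def toy_encode(text):
    # text -> list of token IDs
    return [vocab[w] for w in text.lower().split()]

def toy_decode(ids):
    # list of token IDs -> text
    return " ".join(inv_vocab[i] for i in ids)

ids = toy_encode("Hello AI")
print(ids)               # the numbers the "model" would see
print(toy_decode(ids))   # and back to text again
```

The real `tokenizer(prompt, return_tensors="pt")` and `tokenizer.decode(...)` calls in the next step do exactly this job, just with a vocabulary of tens of thousands of subword pieces.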
🧠 Step 5: Let’s Build the Backend (The Brain Behind the Website)
What’s the backend? It’s like the chef in a restaurant. The user places an order (asks a question), and the backend prepares the response using AI.
from flask import Flask, request, jsonify
# Note: flask-ngrok is unmaintained and no longer works with current Flask/ngrok,
# so we use pyngrok instead (already installed in Step 2).
from pyngrok import ngrok

# ngrok now requires a free auth token; create an account at ngrok.com and run:
# ngrok.set_auth_token("paste-your-ngrok-token-here")

app = Flask(__name__)
public_url = ngrok.connect(5000)  # open a public tunnel to Flask's default port
print("Public URL:", public_url)

@app.route("/ask", methods=["POST"])
def ask():
    # Read the question sent by the web page
    prompt = request.json.get("prompt", "")
    inputs = tokenizer(prompt, return_tensors="pt")
    # max_new_tokens caps how much new text the model adds to the prompt
    outputs = model.generate(**inputs, max_new_tokens=100)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return jsonify({"response": response})
🌐 Step 6: Create the Web Page (Frontend)
Now, let’s build the frontend — the part that people see and interact with.
%%writefile index.html
<!DOCTYPE html>
<html>
<head><title>Ask AI</title></head>
<body>
<h2>Ask the AI</h2>
<textarea id="prompt" rows="4" cols="50" placeholder="Type your question here..."></textarea><br>
<button onclick="askAI()">Ask</button>
<p><strong>Response:</strong> <span id="response"></span></p>
<script>
async function askAI() {
  const prompt = document.getElementById("prompt").value;
  const res = await fetch("/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: prompt })
  });
  const data = await res.json();
  document.getElementById("response").innerText = data.response;
}
</script>
</body>
</html>
🧩 Step 7: Connect It All and Run the App
from flask import send_from_directory

@app.route('/')
def serve_frontend():
    # Serve the index.html file we wrote in Step 6 as the home page
    return send_from_directory('.', 'index.html')

app.run()
Look in the output for a public link like https://xxxxx.ngrok.io
Click it and... 🎉 Your AI-powered website is LIVE!
🏁 Final Words
You just built:
- A working AI assistant using Meta's LLaMA model
- A custom web page
- A backend in Python to power it
You're officially coding with real AI models. That’s amazing. Welcome to the world of AI development!