Monday, December 30, 2024

Using AI for Peace-Building with IFIT

On the last Monday of the year, I had the privilege of collaborating with my lab to deliver a mini-course on prompt engineering for the Institute for Integrated Transitions (IFIT). IFIT’s remarkable work supports fragile and conflict-affected states in achieving inclusive negotiations and sustainable transitions out of war, crisis, or authoritarianism. From their role in the Colombia-FARC peace-building process to their ongoing efforts in Sudan, their impact is both inspiring and transformative.

Course Highlights

Our mini-course focused on equipping IFIT with tools to use generative AI in their peace-building initiatives. Specifically, we explored how to design surveys that can illuminate regional polarization dynamics in Sudan. Here’s what we covered:

  • Creating open-ended and multiple-choice survey questions to understand the effects of tribal and ethnic affiliations on polarization.
  • Using AI to iterate and refine survey questions for clarity and cultural sensitivity.
  • Tailoring surveys for different populations and translating them into local languages using AI tools.
  • Employing AI for A/B testing to optimize survey effectiveness.
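The last bullet can be made concrete with a small sketch. The statistics side of an A/B test needs no AI at all: once two variants of a survey question have been fielded, a two-proportion z-test tells you whether the difference in completion rates is likely real. The function name and the respondent counts below are invented for illustration:

```python
import math

def ab_test_response_rates(a_yes, a_n, b_yes, b_n):
    """Two-proportion z-test comparing completion rates of two
    survey-question variants. Returns both rates and the z-score."""
    p_a, p_b = a_yes / a_n, b_yes / b_n
    # Pooled rate under the null hypothesis that the variants are equivalent
    p_pool = (a_yes + b_yes) / (a_n + b_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical numbers: 180 of 240 respondents completed variant A,
# 210 of 250 completed variant B.
p_a, p_b, z = ab_test_response_rates(180, 240, 210, 250)
print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}")
```

With these made-up numbers the z-score clears the conventional 1.96 threshold, so variant B's higher completion rate would count as statistically significant.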

A Team Effort

This course was a true team effort. A big thank you to Jesse Nava, our program manager, and Rafael Morales from UNAM. Their extensive experience creating surveys for marginalized communities and working with gang-affiliated networks added invaluable depth and expertise to the course.

Explore More

If you’re interested in learning more about how generative AI can support peace-building initiatives, we’ve made our slides available for further exploration. We hope they inspire new ways to leverage technology for positive change.

#AIforGood #PeaceBuilding #GenerativeAI #PromptEngineering

Thursday, December 12, 2024

How to Summon AI Magic with Python: A Fun Guide to Generative AI APIs

Hey there, tech explorers! Ever wanted to whip up some magical AI-generated text, like having robot Shakespeares at your fingertips? Well, today’s your lucky day! We’re here to break down a piece of Python code that lets you chat with a fancy AI model and generate text like pros. No PhDs required, we promise.

First Things First: The Toolbox

Before we can talk to the AI, we need to grab some tools. Think of it like prepping for a camping trip—you need a tent (the model) and some snacks (the tokenizer).

pip install transformers

This command installs the Transformers library, which is like the Swiss Army knife of AI text generation. It’s brought to you by Hugging Face (no, not the emoji—it’s a company!).

Step 1: Unlock the AI Vault

We’ll need to log in to Hugging Face to get access to their cool models. Think of it as showing your library card before borrowing books.

from huggingface_hub import login
login("YOUR HUGGING FACE LOGIN")

Replace "YOUR HUGGING FACE LOGIN" with your actual access token (you can create one under Settings → Access Tokens on the Hugging Face website). It’s how we tell Hugging Face, "Hey, it’s us—let us in!"

Step 2: Meet the Model

Now we load the AI brain. In our case, we’re using Meta’s Llama 3.2, which sounds like a cool llama astronaut but is actually an advanced AI model.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

  • Tokenizer: This breaks down your input text into AI-readable gibberish (tokens).
  • Model: The big brain that generates the text.

Step 3: Give It Something to Work With

Now comes the fun part: asking the AI a question or giving it a task.

input_text = "Explain the concept of artificial intelligence in simple terms."
inputs = tokenizer(input_text, return_tensors="pt")

  • input_text: This is your prompt—what you’re asking the AI to do.
  • tokenizer: It converts your input into numbers the model can understand.

Step 4: Let the Magic Happen

Here’s where the AI flexes its muscles and generates text based on your prompt.

outputs = model.generate(
    inputs["input_ids"].to(model.device),  # put the prompt wherever the model lives
    max_length=100,
    num_return_sequences=1,
    do_sample=True,   # sampling must be on for temperature/top_p to have any effect
    temperature=0.7,
    top_p=0.9,
)

  • The .to(...) call: Makes sure your input lives on the same device as the model (your GPU, if you’ve got one).
  • max_length: The maximum length of the response, counting both your prompt and the generated text, in tokens.
  • num_return_sequences: How many different responses you want back.
  • temperature: Controls creativity. Lower values play it safe; higher values get adventurous.
  • top_p: Controls how "risky" the word choices are by sampling only from the most likely candidates (nucleus sampling).
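Temperature is easier to feel than to describe. Here’s a toy, pure-Python sketch of the math behind the knob (an illustration, not the actual Transformers code): the model’s raw scores are divided by the temperature before being turned into probabilities, so low temperatures sharpen the distribution and high temperatures flatten it.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into next-word probabilities.
    Dividing by a small temperature exaggerates the gaps between scores;
    dividing by a large one smooths them out."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate words
cool = softmax_with_temperature(logits, 0.5)  # confident: [0.84, 0.11, 0.04]
warm = softmax_with_temperature(logits, 2.0)  # adventurous: [0.48, 0.29, 0.23]
print([round(p, 2) for p in cool])
print([round(p, 2) for p in warm])
```

At temperature 0.5 the top word hogs most of the probability; at 2.0 the runners-up get a real chance, which is exactly why higher temperatures feel more creative.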

Step 5: Ta-Da! Your Answer

Finally, we take the AI’s response and turn it back into human language.

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)

skip_special_tokens=True tells the tokenizer, "Please don’t include the model’s internal bookkeeping symbols in the answer."

So, What’s Happening Under the Hood?

Here’s a quick analogy for how this works:

  • We give the AI a prompt (our input text).
  • The tokenizer translates our words into numbers.
  • The model (our AI brain) uses these numbers to predict the best possible next words.
  • It spits out a response, which the tokenizer translates back into words.

It’s like ordering a coffee at Starbucks: we place the order, the barista makes it, and voilà—our coffee is ready!
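To make the encode-and-decode round trip concrete, here’s a toy word-level tokenizer. The real Llama tokenizer splits text into subword pieces drawn from a much larger vocabulary, but the shape of the loop is the same: words in, numbers through the model, words back out. The five-word vocabulary below is invented for the example:

```python
# Toy word-level tokenizer, invented for illustration. Real tokenizers
# use subword pieces, but the round trip has the same shape.
vocab = {"explain": 0, "ai": 1, "in": 2, "simple": 3, "terms": 4}
id_to_word = {i: w for w, i in vocab.items()}

def encode(text):
    """Words -> token ids (the 'AI-readable gibberish')."""
    return [vocab[word] for word in text.lower().split()]

def decode(ids):
    """Token ids -> words, the job tokenizer.decode does in Step 5."""
    return " ".join(id_to_word[i] for i in ids)

ids = encode("Explain AI in simple terms")
print(ids)          # [0, 1, 2, 3, 4]
print(decode(ids))  # explain ai in simple terms
```

The model only ever sees the list of ids; everything human-readable happens on either side of it.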

Why Should We Care?

Generative AI APIs like this are the backbone of chatbots, creative writing tools, and even marketing copy generators. Whether we’re developers, writers, or just curious, playing with this code is a great way to dip our toes into the AI ocean.

Ready to Try It?

Copy the code, tweak the prompt, and see what kind of magic we can summon. Who knows? We might create the next big AI-powered masterpiece—or at least have some fun along the way.

Now go forth and generate! 🎉