Building a Writing Assistant with LangChain and Qwen-2.5-32B

In the era of Artificial Intelligence, large language models are key to automatically creating content, communicating with people, and solving complex problems intelligently. Among the strongest of these is Qwen 2.5 32B, a highly capable AI model with 32 billion parameters developed by Alibaba's DAMO Academy. Known for producing high-quality content, reasoning effectively, and understanding context, Qwen 2.5 32B is taking AI capabilities to new levels. This article discusses how Qwen 2.5 32B and LangChain work together to transform AI applications, their features and strengths, how they perform in real-world use, and why they matter as part of artificial intelligence.

Learning Objectives

  • Understand the capabilities and applications of Qwen 2.5 32B in AI-powered content generation for building a writing assistant.
  • Learn how LangChain integrates with Qwen 2.5 32B to enhance AI-driven workflows for building a writing assistant.
  • Explore practical implementations of Qwen 2.5 32B for rewriting, prompt generation, and text simplification.
  • Set up a Streamlit-based AI application using LangChain and Qwen 2.5 32B.
  • Gain insights into optimizing AI prompts for improved text clarity and structured communication.

This article was published as part of the Data Science Blogathon.

What’s Qwen 2.5 32B?

Qwen 2.5 32B is a large language model developed by Alibaba's DAMO Academy. It is part of the Qwen series, known for its powerful natural language understanding and generation capabilities. With 32 billion parameters, the model is designed to handle a wide range of AI tasks, including:

  • Text generation (creative and professional writing)
  • Code generation
  • Translation and summarization
  • Conversational AI
  • Advanced reasoning and problem-solving

Qwen 2.5 32B is optimized for high-quality text generation, making it a great choice for applications that require human-like fluency and context awareness.

What’s LangChain?

LangChain is an AI framework that helps developers build applications on top of large language models (LLMs) like Qwen 2.5 32B. It provides tools to:

  • Connect LLMs with external data sources
  • Manage multi-step reasoning and decision-making
  • Create AI-powered agents that interact with users dynamically
  • Build chatbots, automation tools, and AI-driven applications

By combining LangChain with Qwen 2.5 32B, businesses can build advanced AI applications that rewrite sentences, generate prompts, simplify text, and improve writing quality.
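To make the pattern concrete, here is a minimal sketch of how the two pieces fit together: a PromptTemplate is wrapped in an LLMChain and run against Qwen-2.5-32B served through Groq. It mirrors the code developed later in this article; the API_KEY environment variable and the qwen-2.5-32b model name are assumptions carried over from that setup, and the prompt text is purely illustrative.

import os
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_groq import ChatGroq

# Assumes a Groq API key is available as the API_KEY environment variable (see Step 3).
llm = ChatGroq(groq_api_key=os.getenv("API_KEY"), model_name="qwen-2.5-32b")

# A simple rewriting prompt; the wording here is illustrative only.
prompt = PromptTemplate(
    input_variables=["draft"],
    template="Rewrite the following text so it is clear and professional:\n{draft}",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(draft="we was hoping you could send the report asap"))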

Problem

Effective communication is a critical challenge for individuals and businesses alike. Poorly structured sentences, complex jargon, and unclear prompts often lead to misunderstandings, inefficiencies, and low-quality AI-generated outputs. Whether writing professional emails, generating precise AI prompts, or simplifying technical content, users often struggle to express their ideas in a clear, structured, and impactful way.

Solution

This AI-powered app solves the problem by enhancing text clarity, optimizing AI prompt generation, and simplifying complex content:

  • Rewrite Sentence: Ensures grammatically correct, polished, and professional writing.
  • Image and Video Prompt Generator: Creates well-structured prompts for accurate AI-generated media.
  • Text Simplifier: Converts complex documents into easy-to-understand language.

Flow Diagram

The Text Improvement App follows a streamlined workflow in Streamlit to enhance user input efficiently. The process begins when the user selects a feature and enters text for improvement. Upon clicking the process button, the system loads the ChatGroq LLM model and determines the appropriate processing logic based on the chosen functionality: rewriting sentences, generating image and video prompts, or simplifying text. Each processing path is executed accordingly, using LLMChain to generate refined output. Finally, the improved text is displayed within the Streamlit interface, ensuring a seamless and user-friendly experience.

(Flow diagram of the Text Improvement App workflow.)

Building a Writing Assistant with Streamlit and LangChain

Below we walk through setting up an AI-powered text improvement app using Streamlit and LangChain. From environment setup to processing user inputs, follow these steps to build an intuitive and efficient text enhancement tool.

Step 1: Environment Setup

Create a virtual environment using python -m venv env and activate it based on your operating system (Windows or macOS/Linux).

# Create a virtual environment
python -m venv env

# Activate it on Windows
.\env\Scripts\activate

# Activate it on macOS/Linux
source env/bin/activate

Step 2: Install the Requirements

Install all required packages by running pip install -r requirements.txt using the file from the provided GitHub link.

pip install -r https://raw.githubusercontent.com/Gouravlohar/rewriter/refs/heads/main/requirements.txt
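The authoritative package list is the requirements.txt in the linked repository. Judging only from the imports used later in this article, it most likely contains entries along these lines (an unpinned, illustrative sketch, not the repository's actual file):

streamlit
langchain
langchain-groq
python-dotenv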

Step 3: API Key Setup

Obtain an API key from Groq and store it in the .env file as API_KEY="Your API KEY PASTE HERE".

Go to Groq to get an API key.


Paste the API key into the .env file:

API_KEY="Your API KEY PASTE HERE"

Step 4: Import Essential Libraries

Import essential libraries such as os, streamlit, PromptTemplate, LLMChain, and ChatGroq for AI-based text processing.

import os
import streamlit as st
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_groq import ChatGroq
from dotenv import load_dotenv

Step 5: Load the API Key

Load the API key from the .env file using load_dotenv() and validate that it exists before proceeding with the app execution.

load_dotenv()
groq_api_key = os.getenv("API_KEY")
if not groq_api_key:
    st.error("Groq API Key not found in .env file")
    st.stop()

We load the API key from a .env file and make sure it is accessible before running the app.

Step 6: Creating the Streamlit UI

Design the interface with a sidebar that allows users to select from three functionalities: Rewrite Sentence, Image & Video Prompt Generator, or Text Simplifier.

App Selection Sidebar

st.title("Text Improvement App")
st.sidebar.header("Select App")
st.sidebar.markdown("Choose the functionality you wish to use:")

app_choice = st.sidebar.selectbox("Choose an App", options=[
    "Rewrite Sentence", 
    "Image and Video Prompt Generator", 
    "Text Simplifier"
])

Step 7: Defining AI Prompt Templates

Set up structured prompts for the different functionalities, including tone adjustments, dialect variations, and creative text transformation.

Rewrite Sentence Template

rewrite_template = """
Below is a draft text that may need improvement.
Your goal is to:
- Edit the draft for clarity and readability.
- Adjust the tone as specified.
- Adapt the text to the requested dialect.

**Tone Examples:**
- **Formal:** "Greetings! Elon Musk has announced a new innovation at Tesla, revolutionizing the electric vehicle industry. After extensive research and development, this breakthrough aims to enhance sustainability and efficiency. We look forward to seeing its impact on the market."
- **Informal:** "Hey everyone! Huge news: Elon Musk just dropped a game-changing update at Tesla! After loads of work behind the scenes, this new tech is about to make EVs even better. Can't wait to see how it shakes things up!"

**Dialect Variations:**
- **American English:** French fries, apartment, garbage, cookie, parking lot
- **British English:** Chips, flat, rubbish, biscuit, car park
- **Australian English:** Hot chips, unit, rubbish, biscuit, car park
- **Canadian English:** French fries, apartment, garbage, cookie, parking lot
- **Indian English:** Finger chips, flat, dustbin, biscuit, parking space

Start with a warm introduction if needed.

**Draft Text, Tone, and Dialect:**
- **Draft:** {draft}
- **Tone:** {tone}
- **Dialect:** {dialect}

**Your {dialect} Response:**
"""

Image & Video Prompt Generator Template

prompt_generator_template = """
Below is a sentence written in poor English:
"{poor_sentence}"

Your task is to generate a creative writing prompt that improves clarity, grammar, and engagement.
"""

image_video_template = """
Below is a sentence:
"{sentence}"

Your task is to generate a detailed and descriptive prompt optimized for text-to-image or text-to-video generation.
The prompt should be vivid and visually oriented to help generate high-quality media content.
"""

Text Simplifier Template

text_simplifier_template = """
Below is a piece of complex text:
"{complex_text}"

Your task is to rewrite this text in simpler and clearer language while preserving its original meaning.
"""

Step 8: Loading the AI Model

Initialize the ChatGroq AI model with Qwen-2.5-32B, enabling real-time text processing with streaming=True.

def load_LLM(groq_api_key):
    """Loads the ChatGroq model for processing."""
    llm = ChatGroq(groq_api_key=groq_api_key, model_name="qwen-2.5-32b", streaming=True)
    return llm
  • This function initializes the ChatGroq AI model.
  • It uses Qwen-2.5-32B, a large language model.
  • streaming=True enables real-time AI responses.
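As an optional sanity check (not part of the original app), the ChatGroq instance returned by load_LLM can be invoked directly once the API key from Step 5 has been loaded; the prompt string below is just an example:

llm = load_LLM(groq_api_key)
response = llm.invoke("Reply with a one-sentence greeting.")  # returns an AIMessage
print(response.content)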

Step 9: Collecting User Input

Based on the chosen feature, prompt users to enter text, select a tone and dialect (for rewriting), or provide descriptive input for image/video generation.

st.header(f"{app_choice}")
st.markdown("Provide the required inputs below:")
with st.container():
    if app_choice == "Rewrite Sentence":
        draft = st.text_area("Draft Text", height=200, placeholder="Enter your text here...")
        col1, col2 = st.columns(2)
        with col1:
            tone = st.selectbox("Select desired tone", options=["Formal", "Informal"])
        with col2:
            dialect = st.selectbox("Select dialect", options=[
                "American English", 
                "British English", 
                "Australian English", 
                "Canadian English", 
                "Indian English"
            ])
  • st.header(f"{app_choice}") displays the selected app name dynamically.
  • st.container() groups related UI elements.
  • st.text_area() lets the user enter text.
  • st.selectbox() lets users choose a tone (formal/informal) and a dialect.

Step 10: Handling Inputs for the Other Features

Dynamically adjust the input fields based on the selected functionality, ensuring a user-friendly and adaptable interface.

elif app_choice == "Image and Video Prompt Generator":
    sentence = st.text_area("Sentence", height=200, placeholder="Enter a sentence describing your desired media...")
elif app_choice == "Text Simplifier":
    complex_text = st.text_area("Complex Text", height=200, placeholder="Enter the complex text here...")
  • The app collects different inputs depending on the selected functionality.

Step 11: Processing the User Input

When the "Process" button is clicked, load the AI model, apply the relevant logic using LLMChain, and display the refined output in Streamlit.

if st.button("Process"):
    with st.spinner("Processing your text..."):
        llm = load_LLM(groq_api_key)
        if app_choice == "Rewrite Sentence":
            prompt_obj = PromptTemplate(input_variables=["tone", "dialect", "draft"], template=rewrite_template)
            chain = LLMChain(llm=llm, prompt=prompt_obj)
            result = chain.run(draft=draft, tone=tone, dialect=dialect)
        elif app_choice == "Image and Video Prompt Generator":
            prompt_obj = PromptTemplate(input_variables=["sentence"], template=image_video_template)
            chain = LLMChain(llm=llm, prompt=prompt_obj)
            result = chain.run(sentence=sentence)
        elif app_choice == "Text Simplifier":
            prompt_obj = PromptTemplate(input_variables=["complex_text"], template=text_simplifier_template)
            chain = LLMChain(llm=llm, prompt=prompt_obj)
            result = chain.run(complex_text=complex_text)
    st.markdown("### Output:")
    st.markdown(result)
  • st.button("Process"): starts text processing when clicked.
  • st.spinner("Processing your text..."): shows a loading indicator.
  • load_LLM(groq_api_key): loads the AI model.

Based on the selected feature, it:

  • Chooses the appropriate PromptTemplate.
  • Creates an LLMChain (LangChain's way of executing AI models).
  • Runs the AI model with the user's input.
  • Displays the final result using st.markdown(result).
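Once all of the snippets above are combined into a single script, the app can be started with Streamlit's CLI. The filename app.py is an assumption here; use whatever name the script has in the repository.

# Run from the activated virtual environment (filename assumed)
streamlit run app.py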

Get the full code on GitHub here.

Output


Rewrite Sentence Input

Yo, I've been grinding nonstop and bringing the heat, so I think it's time we talk money. I was hoping for a fatter paycheck, just wanna make sure my hustle and skills ain't going unnoticed. Think we can make this work?

Rewrite Sentence Output

(Screenshot: Rewrite Sentence output in the Streamlit app.)

Image and Video Prompt Generator Input

A futuristic city with flying cars and neon lights.

Image and Video Prompt Generator Output

(Screenshot: Image and Video Prompt Generator output in the Streamlit app.)

Text Simplifier Input

In recent years, the exponential advancements in artificial intelligence and machine learning algorithms have not only enhanced the efficiency of data processing and predictive analytics but have also introduced unprecedented challenges in ethical decision-making, data privacy, and algorithmic bias, necessitating a multidisciplinary approach that integrates computational sciences, legal frameworks, and ethical considerations to ensure the responsible deployment of AI-driven technologies across diverse sectors, including healthcare, finance, and autonomous systems.

Text Simplifier Output

(Screenshot: Text Simplifier output in the Streamlit app.)

Conclusion

The Text Improvement App is a powerful AI-driven tool designed to refine text clarity, creativity, and readability. Developed with Streamlit and LangChain, it offers features such as sentence rewriting, AI-ready prompt generation, and text simplification. Powered by Groq's Qwen-2.5-32B model, it delivers high-quality, real-time text transformation, making it a valuable tool for professionals, students, and content creators. Future upgrades, including voice commands and multi-language support, will make this writing assistant even more versatile and efficient for diverse user needs.

Key Takeaways

  • The app uses LangChain and Groq AI to refine and simplify text efficiently.
  • Users can rewrite sentences, generate media prompts, and simplify text, making it versatile for different needs.
  • The Rewrite Sentence feature supports tone adjustments (formal/informal) and dialect customization.
  • Built with Streamlit, the app provides a simple and interactive experience for seamless text processing.
  • Adding multilingual support, voice input, or additional AI models could further enhance the app's capabilities.

Frequently Asked Questions

Q1. What is the purpose of the Text Improvement App?

A. The app helps users improve text clarity, generate creative prompts for media, and simplify complex sentences using AI.

Q2. How does the Rewrite Sentence feature work?

A. It refines text by improving grammar, clarity, and tone. Users can also select a preferred dialect for localization.

Q3. Can the app generate prompts for AI-generated images and videos?

A. Yes, the Image and Video Prompt Generator converts simple sentences into detailed prompts optimized for AI-generated media.

Q4. Is the Text Simplifier feature useful for non-native English speakers?

A. Absolutely! It simplifies difficult sentences while preserving their meaning, making content more accessible.

Q5. What AI model does this app use?

A. The app is powered by Groq's Qwen-2.5-32B model, which provides high-quality text processing and content generation.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.

Hi, I'm Gourav, a Data Science enthusiast with a foundation in statistical analysis, machine learning, and data visualization. My journey into the world of data began with a curiosity to uncover insights from datasets.