Sunday, July 07, 2024

How to set specific version of dependency in poetry

Here I will pin langchain-core==0.2.2 instead of the 0.2.3 currently set in my pyproject.toml file.

To make Poetry use langchain-core==0.2.2, pin it as a dependency in your pyproject.toml file. Here's how:

  1. Open your pyproject.toml file in your project directory.
  2. Locate the [tool.poetry.dependencies] section.
  3. Add the following line to specify the version of langchain-core you want to use: langchain-core = "==0.2.2"
  4. Save the pyproject.toml file.

After making this change, Poetry will use langchain-core==0.2.2 when you run poetry install or poetry update.
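For reference, the dependencies section would then look roughly like the sketch below; the python constraint shown is only a placeholder for whatever your project already declares.

[tool.poetry.dependencies]
python = "^3.10"   # placeholder: keep the constraint your project already has
langchain-core = "==0.2.2"

If a poetry.lock file already exists, you may also need to run poetry lock (or poetry update langchain-core) so the lock file picks up the pinned version before poetry install.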

Note: Make sure you have Poetry installed on your system before running these commands. You can install Poetry by following the instructions on the official Poetry website.

Saturday, June 01, 2024

Common prevention techniques against injection attacks

Following up on my previous blog post, here are a few prevention techniques against injection attacks:

  1. Input Validation: Validate and sanitize all user input to ensure it meets expected formats and ranges. Avoid dynamic queries built using untrusted input.

  2. Use Parameterized Queries: Utilize parameterized queries with prepared statements or stored procedures to prevent the injection of malicious code (see the sketch after this list).

  3. Escaping Input: Escape special characters in user input so they are treated as literal data rather than executable syntax before use.

  4. Least Privilege Principle: Applications should operate with the least privilege necessary to limit the potential impact of a successful injection attack.

  5. Regular Software Patching: Keep all software components and frameworks up to date to patch known injection vulnerabilities.

  6. Web Application Firewalls (WAF): Implement WAF solutions to filter and block malicious input before it reaches the application.

  7. Code Reviews and Security Testing: Conduct regular code reviews, security audits, and penetration testing to identify and mitigate potential injection vulnerabilities.

  8. Secure Development Practices: Train developers in secure coding practices to minimize the introduction of injection vulnerabilities during application development.

  9. Secure Configuration: Follow best practices for server configuration and secure coding guidelines to reduce the attack surface for injection attacks.
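To make point 2 concrete, here is a minimal sketch in Python using the standard-library sqlite3 module; the table and column names are made up for illustration. The untrusted value is passed as a bound parameter, so the database driver treats it purely as data, never as part of the SQL statement.

import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # UNSAFE: string formatting lets a crafted username rewrite the query,
    # e.g. "' OR '1'='1" would return every row.
    # cursor = conn.execute(f"SELECT id, email FROM users WHERE name = '{username}'")

    # SAFE: the ? placeholder binds the value as data, not as SQL syntax.
    cursor = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cursor.fetchall()

The same idea carries over to other drivers and ORMs: the placeholder syntax differs (for example %s in psycopg), but the principle of keeping query text and user data separate is identical.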

By implementing a combination of these techniques and maintaining a proactive approach to web application security, organizations can significantly reduce the risk of falling victim to injection attacks. 

Friday, May 24, 2024

How to set verbose in Langchain

Here is how you can set it globally:

from langchain.globals import set_verbose, set_debug

# Debug mode logs all events, including the raw inputs and outputs of LLM calls.
set_debug(True)
# Verbose mode prints the important inputs and outputs in a more readable format.
set_verbose(True)
  

You can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callback calls made specifically by that object).

from langchain.agents import AgentType, initialize_agent

# `tools` (a list of Tool objects) and `llm` (a language model instance) are assumed
# to have been created earlier.

# Passing verbose=True to initialize_agent will pass that along to the AgentExecutor (which is a Chain).
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("who invented electricity")
  

Hope this helps!!

Wednesday, May 22, 2024

OpenAI Unveils Revolutionary GPT-4o Model: Enhancing ChatGPT Capabilities

In a groundbreaking move, OpenAI has unveiled its latest advancement in artificial intelligence: GPT-4o, the newest version of the language model behind ChatGPT. The model promises to revolutionize user interactions, offering real-time spoken conversations, memory capabilities, and multilingual support.

In this blog post, we'll delve into the key features and capabilities of GPT-4o and explore how it's set to change the way we interact with technology.


Key Features of GPT-4o:

  1. Real-Time Reasoning: GPT-4o boasts real-time reasoning capabilities across text, audio, and vision inputs and outputs. This means it can process and generate responses in real-time, emulating human conversation.
  2. Speedy Response Times: GPT-4o is designed to respond lightning-fast, answering audio inputs in as little as 232 milliseconds. This lets users have smooth and natural conversations with the model, much like a real-time conversation with a human.
  3. Enhanced Vision and Audio Understanding: GPT-4o significantly improves on earlier models' ability to understand and process visual and audio inputs. This makes it more versatile and capable of handling a wide range of user interactions, from visual search queries to spoken conversations.
  4. Multilingual Support: GPT-4o is not limited to a single language. It can handle multiple languages seamlessly, allowing users to interact with the model in their preferred language. This expands the model's applicability and accessibility to a global audience.
  5. Memory Capabilities: GPT-4o is equipped with enhanced memory capabilities, allowing it to retain and contextualize information from previous interactions. This enables the model to understand and respond to complex and nuanced conversations, providing a more personalized and context-aware experience.
  6. Safety Features: GPT-4o comes with built-in safety features to mitigate potential risks and ensure user safety. These features include safeguards against inappropriate content, extensive testing to ensure accuracy and reliability, and mechanisms to handle edge cases and unexpected inputs.
  7. Free Access: OpenAI has made GPT-4o available for free to all users. This removes barriers to access and enables developers and individuals to leverage the model for a wide range of applications, from chatbots to language translation.
  8. Premium Options: OpenAI offers premium options for GPT-4o, allowing users to access higher capacity limits and additional features. These premium options provide access to more advanced capabilities, such as improved image recognition and natural language processing.
  9. API Integration: Developers can access GPT-4o through the OpenAI API. The API allows developers to integrate the model into their applications, enabling them to leverage its capabilities for various tasks, from chatbots to content generation (see the short sketch after this list).
  10. Future Expansions: OpenAI plans to incorporate audio and video capabilities into GPT-4o in the future. This expansion will enable the model to handle multimedia inputs and generate responses in real-time, further enhancing its capabilities.
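To illustrate point 9, here is a minimal sketch of calling GPT-4o through the official openai Python SDK (v1.x). The prompt and the reliance on the OPENAI_API_KEY environment variable are assumptions for the example, not details from the announcement.

from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GPT-4o can do in one sentence."},
    ],
)

print(response.choices[0].message.content)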

Wednesday, May 15, 2024

AI announcements from Google I/O 2024

Google I/O was jam-packed with AI announcements. Here's a roundup of all the latest developments.

  1. Google is introducing "Ask Photos," a feature that allows Gemini to search your Google Photos library in response to your questions. Example: Gemini can identify a license plate number and provide an accompanying picture for confirmation.

  2. Google Lens now allows video-based searches. You can record a video, ask a question, and Google's AI will find relevant answers from the web.

  3. Google introduced Gemini 1.5 Flash, a new AI model optimized for fast responses in narrow, high-frequency, low-latency tasks.

  4. Google has enhanced Gemini 1.5 to improve its translation, reasoning, and coding capabilities. Additionally, the context window of Gemini 1.5 Pro has been doubled from 1 million to 2 million tokens.

  5. Google announced Project Astra, a multimodal AI assistant designed to be a do-everything AI agent. It will use your device's camera to understand surroundings, remember item locations, and perform tasks on your behalf.

  6. Google unveiled Veo, a new generative AI model rivaling OpenAI's Sora. Veo can generate 1080p videos from text, image, and video prompts, offering various styles like aerial shots or timelapses. It's available to some creators for YouTube videos and is being pitched to Hollywood for potential use in films.

  7. Google is launching Gems, a custom chatbot creator similar to OpenAI's GPTs. Users can instruct Gemini to specialize in various tasks. Example: It can be customized to help users learn Spanish by providing personalized language learning exercises and practice sessions. This feature will soon be available to Gemini Advanced subscribers.

  8. A new feature, Gemini Live, will enhance voice chats with Gemini by adding extra personality to the chatbot's voice and allowing users to interrupt it mid-sentence.

  9. Google is introducing "AI Overviews" in search. With this update, a specialized Gemini model will design and populate results pages with summarized answers from the web, similar to tools like Perplexity.

  10. Google is adding Gemini Nano, the lightweight version of its Gemini model, to Chrome on desktop. This built-in assistant will use on-device AI to help generate text for social media posts, product reviews, and more directly within Google Chrome.