Category Archives: AI

AI-related blogs

A Model for Mental Health Using LLM

We are currently facing one of the largest mental health crises of this century. The pressures and challenges of day-to-day living exceed the mental health resources that are available and accessible to most people. Data plays a substantial role in healthcare, especially when it comes to developing treatment plans for individuals facing mental health challenges. Technologists are looking at new ways to provide resources by securely utilizing this foundational data.

Large language models (LLMs) have made strides in the healthcare domain by leveraging their capacity to analyze text data effectively. By employing natural language processing (NLP) techniques, LLMs can sift through sources such as records, therapy transcripts, and social media content to unveil patterns and connections pertaining to mental health issues. Such capabilities enable them to uncover insights that traditional methods may have missed, thereby offering avenues for comprehending and predicting health outcomes. This type of data processing and retrieval may prompt individuals to reconsider what is shared on social media channels.

Nevertheless, it is crucial to recognize that LLMs have limitations, including biases in their training data and challenges in interpreting emotions accurately. The risk is greater for underrepresented communities: where resources are limited, the training data will be limited as well. Mental health is not a one-size-fits-all issue. By examining language patterns associated with conditions like anxiety, depression, and other mental disorders, LLMs can support healthcare professionals in identifying problems and providing interventions that could potentially enhance outcomes while also contributing to destigmatizing mental health matters.

Large language models have also revolutionized the industry by processing and generating text that mirrors human language patterns using extensive datasets. Healthcare is one area where they have demonstrated particular promise. Analyzing large amounts of text data allows LLMs to uncover patterns, predict outcomes, and offer insights that were previously hard to obtain. Let’s explore how LLMs are transforming our understanding of the healthcare sector:

Patient Data Review
LLMs can examine records, therapy notes, and patient communications to identify symptoms, track progress, and suggest treatments. This data interpretation capability helps mental health professionals better understand their patients’ conditions.
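
To make this concrete, here is a minimal sketch of how free-text notes might be screened for mental-health-related language patterns with an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model name, the example note, and the candidate labels are illustrative assumptions, not part of any specific product; real clinical use would require validated models, patient consent, and clinician review.

```python
# Illustrative sketch: screening free-text notes for mental-health-related
# language patterns with a zero-shot classifier. The model and labels are
# assumptions for demonstration only, not a clinical tool.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # a commonly used general-purpose NLI model
)

note = (
    "Patient reports trouble sleeping for the past month, "
    "persistent worry about work, and little interest in hobbies."
)

labels = ["anxiety-related language", "depression-related language", "neutral"]
result = classifier(note, candidate_labels=labels, multi_label=True)

# Print each label with its score so a clinician can review the signal,
# not act on it automatically.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```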

Diagnosis
By analyzing interactions, social media posts, and other digital footprints, LLMs can spot early signs of mental health issues. This timely detection can facilitate interventions and enhance patient outcomes.

Personalized Treatment Plans
LLMs can help develop tailored treatment plans by analyzing data and comparing it with databases containing similar cases. This personalized approach holds promise for improving treatment effectiveness.

Combatting Stigma
Large language models (LLMs) can help reduce the stigma surrounding seeking mental health support by providing assistance through chatbots and virtual assistants.

Company Insights
Let’s consider two companies that are trying to address this shortage with a technical solution. Woebot Health is utilizing LLMs for mental healthcare support. They have introduced an agent called Woebot that delivers cognitive behavioral therapy (CBT) through interactive chat sessions. Woebot uses language models to understand and communicate with users, facilitating conversations that help in managing symptoms of depression and anxiety.

Features

  • Live Chats. Engages with users in a timely manner, providing support and interventions.
  • Personalized Guidance. Offers advice and coping mechanisms by analyzing user interactions.
  • Accessibility. With 24/7 availability, assistance is always at hand.

Wysa
Wysa offers an AI-powered mental health app featuring a coach who guides users through evidence-based practices. The Wysa chatbot allows users to express their emotions and receive guidance on handling stress, anxiety, depression, and other mental health issues. It tailors activities to suit the individual’s needs.

Features

  • AI Coaching. Leverages advanced language models to build user profiles and guide users through cognitive behavioral therapy (CBT), mindfulness exercises, and other therapeutic activities.
  • Data Security. The app encrypts and securely stores data in AWS Cloud and MongoDB Atlas.

Utilizing large language models will allow these companies and others to analyze varied sources such as records, scientific papers, and social media content. This presents an opportunity for mental health research by uncovering correlations and treatment approaches that may pave the way for future advancements in mental health care. Let’s consider the following data points:

Predictive Analysis

Language models have the ability to predict health trends and outcomes using data, which helps in planning and allocating resources for healthcare services.

Understanding Natural Language

By understanding the language used during therapy sessions, language models can assess the effectiveness of methods and provide suggestions for improvement.

Ethical Considerations

Although language models hold potential in healthcare, there are challenges and ethical considerations that must be addressed. Safeguarding the privacy and security of health information is essential to maintain user trust and confidentiality. Addressing biases in the training data used for language models is a necessary step towards ensuring unbiased support in mental health contexts, and prioritizing fairness reassures users that support is being delivered equitably. It is important to validate and update advice and insights from language models regularly to uphold their accuracy and reliability; this validation process plays a key role in building confidence in the reliability of mental health support. While language models can serve as powerful tools, they should complement rather than replace mental health professionals, enhancing human judgment instead of substituting for it.

Future Directions

Incorporating tools based on language models into existing healthcare systems can streamline decision-making processes, with AI serving as an element within treatment procedures. Advancements in LLM technology could bring about health support tailored to individual backgrounds, preferences, and responses to previous treatments. By analyzing text and data sources like speech patterns and facial expressions, a comprehensive picture of a person’s well-being can be obtained.

Companies such as Woebot Health and Wysa are at the forefront of using AI to provide customized mental health guidance. This underscores their duty to use these technologies responsibly so that they can fulfill their potential. Despite the obstacles, the potential benefits of LLMs in understanding and treating mental health conditions are significant. As advancements continue, we can look forward to solutions that enhance our grasp on supporting mental well-being.

Ethical considerations surrounding LLMs in healthcare are crucial. Prioritizing data privacy and addressing any present biases are essential, and the responsible use of AI-generated insights is key to implementing these technologies. Despite the challenges, the transformative impact of LLMs on healthcare opens up new avenues for diagnosing, treating, and supporting individuals dealing with mental health issues in the future.

What’s Cooking in the AI Test Kitchen – RAG

Cooking is one of my favorite hobbies, so I took the opportunity to combine my interests in technology and cooking to discuss an innovative chapter in generative AI: RAG. First, let’s get the basics out of the way.

What is Generative AI? How is it different from traditional AI?

Artificial Intelligence (AI)

AI refers to computer systems performing tasks that typically require human intelligence. It includes machine learning (ML) and deep learning (DL) methods. AI systems learn from data, identify patterns, and make decisions autonomously. Examples of AI applications include speech recognition and self-driving cars. While AI can simulate thinking processes, it does not have human consciousness.

Generative AI

Generative AI is a subset of AI that focuses on generating content. In contrast to traditional AI, which is driven by rules, generative AI employs self-learning models to produce novel outputs. Examples of generative AI include text generation models like GPT-4 and image creation models like DALL-E. This branch of AI merges creativity with innovation, empowering machines to create art, music, and literature. Nonetheless, it faces challenges related to ethical considerations, bias mitigation, and control over the generated content.

Generally speaking, traditional AI techniques cover a broad range of analytical applications, while generative AI stands out for its emphasis on creativity and original content creation.

Now, back to our test kitchen. Let’s put on our virtual aprons to travel on a flavor-filled journey to understand how RAG (Retrieval-Augmented Generation) works in the realm of Generative AI. Imagine we’re in a busy kitchen, aprons on, ready to cook some insights. Let’s use a basic roasted chicken recipe for this analogy.

The Recipe

Ingredients:

  • Chicken is our base ingredient, representing the raw text or prompt. I recommend cleaned chicken (unbiased).
  • Seasonings: These are the retrieval documents, like a well-stocked spice rack. Each seasoning (document) adds depth and context to our chicken (prompt).

Preparation:

  • Marinating the Chicken: We start by marinating our chicken with the prompt. This is where RAG comes into play. It retrieves relevant documents (seasonings) from its vast knowledge base (like a library pantry).
  • Selecting the Right Spices: RAG carefully selects the most relevant documents (spices) based on the prompt. These could be scientific papers, blog posts, or historical texts. This is my favorite part.

Cooking Process:

  • Simmering and flavor injecting: Just as we simmer our chicken with spices, RAG injects the prompt with context from the retrieved documents. It absorbs the flavors of knowledge, understanding nuances, and connections.
  • Balancing Flavors: RAG balances the richness of retrieved information. Too much spice (document) overwhelms the dish (response), while too little leaves it bland.

Generative Magic:

  • The Cooking Alchemy: Now, the magic happens. RAG combines the marinated prompt with the seasoned context. It’s like a chef creating a new recipe, drawing inspiration from old cookbooks of classic dishes.
  • Creating the Dish: RAG generates an informed and creative response. It’s not just recycling facts; it’s crafting a unique flavor profile.

Serving the Dish:

  • Plating and Garnishing: Our dish is ready! RAG delivers a rich, layered, and tailored response to the prompt—like presenting a beautifully plated meal.
  • Bon Appétit!: The user enjoys the response, savoring the blend of retrieval and generation. Just as a well-seasoned chicken satisfies the palate, RAG satisfies the hunger for knowledge and creativity.

RAG reminds me of a beautiful meal that can satisfy the most discerning palate, prepared by a well-traveled chef who searches the globe for the best ingredients and retrieves them to generate tasteful dishes. So, next time you encounter RAG, think of yourself as a chef creating delightful technology-based feasts.
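
To ground the analogy in code, here is a minimal sketch of the retrieve-then-generate pattern: embed a small “pantry” of documents, pull the most relevant ones for a prompt, and fold them into the context before generation. The embedding model name and the tiny pantry are illustrative assumptions, and the final generation step is left to whichever LLM you use.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# Assumes the sentence-transformers package; the model name is one common choice.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Our "spice rack": a tiny knowledge base of documents.
pantry = [
    "Roasting chicken at 425 F yields crisp skin and juicy meat.",
    "Brining chicken overnight in salt water keeps it moist.",
    "Garlic and thyme are classic aromatics for roasted chicken.",
]
pantry_embeddings = embedder.encode(pantry, convert_to_tensor=True)

def retrieve(prompt: str, k: int = 2) -> list[str]:
    """Pick the k most relevant documents (spices) for the prompt (chicken)."""
    query_embedding = embedder.encode(prompt, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, pantry_embeddings, top_k=k)[0]
    return [pantry[hit["corpus_id"]] for hit in hits]

def build_augmented_prompt(prompt: str) -> str:
    """Marinate the prompt in retrieved context before handing it to an LLM."""
    context = "\n".join(retrieve(prompt))
    return f"Context:\n{context}\n\nQuestion: {prompt}\nAnswer:"

print(build_augmented_prompt("How do I get crispy skin on a roasted chicken?"))
# The augmented prompt would then be passed to whatever LLM handles generation.
```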

A curated list of retrieval-augmented generation (RAG) in large language models.

Roasted Garlic Chicken

It’s a Family Affair with Claude 3

Anthropic announced the Claude 3 model family last month, which sets new industry benchmarks across various cognitive tasks. I am always excited to see what comes from Anthropic, so I was eager to see this group arrive.

The family includes three state-of-the-art models in ascending order of capability: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Each successive model offers increasingly powerful performance, allowing users to select the optimal balance of intelligence, speed, and cost for their specific application.

Opus, Haiku, and Sonnet are now available in claude.ai, and the Claude API is generally available in 159 countries. All Claude 3 models show increased capabilities in analysis and forecasting, nuanced content creation, code generation, and conversing in non-English languages like Spanish, Japanese, and French.

Let’s take a look at each member of the Claude 3 family:

Opus

Opus is considered the most intelligent model. It outperforms its peers on most of the standard evaluation benchmarks for AI systems, including undergraduate-level expert knowledge (MMLU), graduate-level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits near-human comprehension and fluency levels on complex tasks, leading the frontier of general intelligence. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding. Opus shows us the outer limits of what’s possible with generative AI.

Haiku

Claude 3 Haiku is the fastest, most compact model for near-instant responsiveness. With state-of-the-art vision capabilities, it caters to various enterprise applications, excelling in analyzing large volumes of documents. Its affordability, security features, and availability on platforms like Amazon Bedrock and Google Cloud Vertex AI make it transformative for developers and users alike.

Sonnet

Sonnet balances intelligence, speed, and cost, making it well-suited for various applications. Notably, it is approximately twice as fast as its predecessor, Claude 2.1. Sonnet excels in tasks requiring rapid responses, such as knowledge retrieval and sales automation. Additionally, it demonstrates a unique understanding of requests and is significantly less likely to refuse answers that push system boundaries. With sophisticated vision capabilities, including the ability to process visual formats like photos, charts, and technical diagrams, Claude 3 Sonnet represents a significant advancement in AI language models.

Let’s Talk Capabilities

Near-instant results

The Claude 3 models can power live customer chats, auto-completions, and data extraction tasks where responses must be immediate and real-time.

Haiku is the fastest and most cost-effective model in its intelligence category. It can read an information- and data-dense research paper on arXiv (~10k tokens) with charts and graphs in less than three seconds. Following its launch, Anthropic is expected to improve performance even further.

For the vast majority of workloads, Sonnet is 2x faster than Claude 2 and Claude 2.1 and has higher levels of intelligence. It excels at tasks demanding rapid responses, like knowledge retrieval or sales automation. Opus delivers similar speeds to Claude 2 and 2.1 but with much higher levels of intelligence.

Strong vision capabilities

The Claude 3 models have sophisticated vision capabilities that are on par with other leading models. They can process various visual formats, including photos, charts, graphs, and technical diagrams. Anthropic is providing this new modality to enterprise customers, some of whom have up to 50% of their knowledge bases encoded in PDFs, flowcharts, or presentation slides.

Fewer refusals

Previous Claude models often made unnecessary refusals that suggested a need for more contextual understanding. Anthropic has made substantial progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models. The Claude 3 models show a more nuanced understanding of requests, recognize actual harm, and refuse to answer harmless prompts much less often.

Improved accuracy

Businesses of all sizes rely on models to serve their customers, making it imperative for model outputs to maintain high accuracy at scale. To assess this, Anthropic uses many complex, factual questions that target known weaknesses in current models. Anthropic categorizes the responses into correct answers, incorrect answers (or hallucinations), and admissions of uncertainty, where the model says it doesn’t know the answer instead of providing inaccurate information. Compared to Claude 2.1, Opus demonstrates a twofold improvement in accuracy (or correct answers) on these challenging open-ended questions while exhibiting reduced incorrect answers.

In addition to producing more trustworthy responses, Anthropic will soon enable citations in their Claude 3 models so they can point to precise sentences in reference material to verify their answers. This is a plus for any AI tool.

Extended context and near-perfect recall

The Claude 3 family of models initially offered a 200K context window at launch. However, all three models can accept inputs exceeding 1 million tokens, and Anthropic may make this available to select customers who need enhanced processing power.

To process long context prompts effectively, models require robust recall capabilities. The ‘Needle In A Haystack’ (NIAH) evaluation measures a model’s ability to recall information from a vast corpus of data accurately. Anthropic enhanced the robustness of this benchmark by using one of 30 random needle/question pairs per prompt and testing on a diverse crowdsourced corpus of documents. Claude 3 Opus not only achieved near-perfect recall, surpassing 99% accuracy, but in some cases, it even identified the limitations of the evaluation itself by recognizing that the “needle” sentence appeared to be artificially inserted into the original text by a human.
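
As a toy illustration of the idea behind NIAH (the real benchmark uses 30 needle/question pairs and a diverse crowdsourced corpus), here is a hedged sketch using the Anthropic Python SDK: bury one “needle” sentence in a long stretch of filler text and ask the model to recall it. The filler, needle, and question are made up; the model identifier is the published Claude 3 Opus name, and an API key is assumed to be set in the environment.

```python
# Toy "Needle In A Haystack" recall check (a simplification of the real benchmark).
# Assumes the anthropic Python SDK and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

# Build a long "haystack" of filler text and bury one needle sentence in it.
filler = "The best thing to do in San Francisco is to walk across the bridge. " * 200
needle = "The secret ingredient for Grandma's roast chicken is smoked paprika."
haystack = filler[: len(filler) // 2] + needle + " " + filler[len(filler) // 2 :]

response = client.messages.create(
    model="claude-3-opus-20240229",  # published Claude 3 Opus identifier
    max_tokens=100,
    messages=[
        {
            "role": "user",
            "content": f"{haystack}\n\nWhat is the secret ingredient for Grandma's roast chicken?",
        }
    ],
)
# A model with strong long-context recall should answer "smoked paprika".
print(response.content[0].text)
```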

Responsible design

Anthropic developed the Claude 3 family of models to be as trustworthy as they are capable. They have several dedicated teams that track and mitigate various risks, ranging from misinformation and CSAM to biological misuse, election interference, and autonomous replication skills. These efforts are much appreciated in a space where misinformation is often overlooked. Anthropic continues to develop methods such as constitutional AI that improve the safety and transparency of their models, and they have tuned their models to mitigate privacy issues that could be raised by new modalities.

Addressing biases in increasingly sophisticated models is an ongoing effort, and Anthropic has made strides with this new release. They remain committed to advancing techniques that reduce biases and promote greater neutrality in their models.

Easier to use

The Claude 3 models are better at following complex, multi-step instructions. They are particularly adept at adhering to brand voice and response guidelines and developing customer-facing experiences. This is a plus for UX developers. In addition, the Claude 3 models are better at producing structured output in popular formats like JSON, making it more straightforward to instruct Claude on use cases like natural language classification and sentiment analysis.
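
Here is a hedged sketch of that structured-output idea: asking a Claude 3 model to return JSON for a simple sentiment classification. The review text and prompt wording are illustrative; the model identifier is the published Claude 3 Haiku name, and real code should validate the JSON since models can still deviate from the requested format.

```python
# Sketch: asking Claude 3 for structured JSON sentiment output (illustrative).
# Assumes the anthropic Python SDK and an API key in the environment.
import json
import anthropic

client = anthropic.Anthropic()

review = "The checkout flow was fast, but the confirmation email never arrived."

response = client.messages.create(
    model="claude-3-haiku-20240307",  # published Claude 3 Haiku identifier
    max_tokens=200,
    system="You are a classifier. Respond with JSON only, no prose.",
    messages=[
        {
            "role": "user",
            "content": (
                "Classify the sentiment of this review as positive, negative, or mixed, "
                'and return JSON of the form {"sentiment": "...", "reason": "..."}.\n\n'
                f"Review: {review}"
            ),
        }
    ],
)

raw = response.content[0].text
result = json.loads(raw)  # real code should handle malformed or non-JSON output
print(result["sentiment"], "-", result["reason"])
```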

Claude 3

Now that you’ve been introduced to the Claude 3 model family, the next question is, where do you begin to explore? Haiku, Sonnet, Opus—there isn’t a wrong choice with Claude 3. Each is like a polished gem with different characteristics, intelligence, speed, and versatility. I envision long hours pondering documentation and building with each one of them.

I’m looking forward to the upcoming citations feature. It’s like adding footnotes to the grand library of AI. Imagine these models pointing to precise sentences in reference material, like scholars citing ancient scrolls. Seriously, I can’t wait for this feature to come out! Claude 3 creates trust and transparency, a solid foundation for AI innovations. The Claude family is a welcome addition to this space. I look forward to the next chapter with Anthropic.

Claude 3

Google Cloud and Claude 3

Simplifying SQL Generation with Vanna: An Open-Source Python RAG Framework


Handling databases often involves crafting complex SQL queries, which can be daunting for those who aren’t SQL experts. The need for a user-friendly solution to streamline SQL generation has led to the development of Vanna, an open-source Python framework.

The Challenge


Crafting complex SQL queries can be time-consuming and requires a deep understanding of the database structure. Existing methods might assist but often lack adaptability to various databases or compromise privacy and security.

Introducing Vanna


Vanna uses a Retrieval-Augmented Generation (RAG) model to take a unique two-step approach.

How it Works – In Two Steps

First, users train the model on their data, and then they can pose questions to obtain SQL queries tailored to their specific database.
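
As a rough sketch of that two-step flow, the snippet below follows Vanna’s documented train-then-ask pattern: feed it DDL, documentation, and example SQL, then ask a plain-English question. The exact setup (API keys, vector store, LLM backend, and whether ask() also executes the query) varies by Vanna version, so treat the specifics as assumptions and check the GitHub repo and documentation linked at the end of this post.

```python
# Rough sketch of Vanna's two-step flow: train on your schema, then ask questions.
# Function names follow Vanna's documented train()/ask() pattern, but exact setup
# (keys, vector store, LLM backend) varies by version -- check the project docs.
import vanna as vn

# Step 1: train the RAG model on what your database looks like.
vn.train(ddl="""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name TEXT,
        signup_date DATE,
        region TEXT
    );
""")
vn.train(documentation="Region codes follow ISO 3166-1 alpha-2, e.g. 'US', 'DE'.")
vn.train(sql="SELECT region, COUNT(*) FROM customers GROUP BY region;")

# Step 2: ask a plain-English question and get SQL back (depending on
# configuration, Vanna may also run the query against a connected database).
sql = vn.ask("How many customers signed up in each region last month?")
print(sql)
```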

Key Features


Simplicity and Versatility: Vanna stands out for its simplicity and adaptability. Users can train the model using Data Definition Language (DDL) statements, documentation, or existing SQL queries, allowing for a customized and user-friendly training process.

Direct Execution

Vanna processes user queries and returns SQL queries that are ready to be executed on the database. This eliminates the need for intricate manual query construction, providing a more accessible way to interact with databases.

High Accuracy

Vanna excels in accuracy, particularly on complex datasets. Its adaptability to different databases and portability across large language models (LLMs) make it a cost-effective and future-proof solution.

Security Measures

Operating securely, Vanna ensures that database contents stay within the local environment, prioritizing privacy.

Continuous Improvement


Vanna supports a self-learning mechanism. In Jupyter Notebooks, it can be set to “auto-train” based on successfully executed queries. Other interfaces prompt users for feedback and store correct question-to-SQL pairs for continual improvement and enhanced accuracy.

Flexible Front-End Experience


Whether working in Jupyter Notebooks or extending functionality to end-users through platforms like Slackbot, web apps, or Streamlit apps, Vanna provides a flexible and user-friendly front-end experience.

Vanna addresses the common pain point of SQL query generation by offering a straightforward and adaptable solution. Its metrics underscore its accuracy and efficiency, making it a valuable tool for working with databases, regardless of SQL expertise. With Vanna, querying databases becomes more accessible and user-friendly.

As an Engineer who loves working with data, I am looking forward to trying Vanna to level up my SQL development.

GitHub

Documentation

Image Credit: Vanna AI

Learning Through Reverse Engineering: Guided by AI

Learning is an evolving journey in the changing realm of technology. While traditional education holds significance, exploring alternative approaches can deepen our understanding of technologies. One effective method is reverse engineering, which not only unveils the inner workings of a product but also lays the groundwork for expanding our skills. Combining reverse engineering and artificial intelligence (AI) can revolutionize your learning experience.

The Basis of Learning

Similar to mathematics, technology relies on a foundational knowledge base. Just as mathematical principles build upon each other, technology requires skills that can be honed and broadened. To venture into mastering new technologies, it’s crucial to acknowledge that emerging products or services often come with a working model and comprehensive documentation – providing a valuable starting point for our exploration.

The Process of Reverse Engineering

At the core of reverse engineering lies dissecting an existing product or service to grasp its mechanisms. For instance, when presented with a working model like a software application or service, the initial step involves replicating it and running the code as is. However, the real magic unfolds when deliberate alterations are made.

Introducing changes and observing their effects gives you insights into how the system operates.
I’ve named this technique the Two Clicks Rule, which helps you grasp the cause-and-effect dynamics in technology. If something goes left during your learning process, you’re a step away from reverting to the original working model. I’ll delve deeper into this concept in a future blog post.

Documenting Progress

An essential aspect of reverse engineering is documentation. Document your experiences – note the modifications made, errors encountered, and solutions applied. This record guides your learning journey, allowing you to retrace your steps if needed and serving as a record of your knowledge.

Discoveries with AI

As technology progresses, so does the involvement of AI in enriching the learning process. Integrating AI tools into reverse engineering can expedite the understanding of systems. AI can detect patterns, recognize dependencies, and propose enhancements or optimizations. The collaboration between human intelligence and artificial intelligence can enhance the effectiveness of reverse engineering as a learning approach.

Reverse engineering isn’t about breaking things but about digging into layers and grasping underlying complexities. By harnessing AI alongside this method, you can accelerate your learning journey and gain insights into emerging technologies.

When you come across a product, consider experimenting, analyzing, and embracing AI as your tech buddy in exploring knowledge.

Rest and Repeat AI for 2024

The winter season brings a sense of slowing down, if only for a moment. However, this doesn’t apply to AI. AI continues to captivate us with its capabilities and the potential for collaboration. With each innovation, I’ve contemplated the challenges and opportunities it presents to humans. After researching, I’ve gathered a few highlights from 2023 that are still relevant today.

Open-source AI development reshaped the landscape of AI frameworks and models. The introduction of PyTorch 2.0 not only set a new bar for the industry but also equipped researchers and developers with powerful tools. Ongoing enhancements to Nvidia’s Modulus and Colossal-AI’s PyTorch-based framework have further enriched the open-source ecosystem, fostering innovation.

AI models have revolutionized content generation and redefined natural language processing as we know it. OpenAI’s GPT-4, a language model at the forefront of this transformation, has pushed the boundaries of AI capabilities, showcasing its proficiency in text-based writing, coding, and complex problem-solving applications. Additionally, Jina AI’s 8K Text Embedding Model and Mistral AI’s Mistral 7B exemplify the growing expertise within the AI community in handling large amounts of data.

There is a trend towards increased collaboration between AI systems and humans to achieve desired outcomes. This highlights the importance of utilizing AI to enhance human capabilities and improve efficiency and effectiveness.

AI has made progress by introducing low-code and no-code solutions. These advancements have made AI more accessible to individuals without deep technical expertise, promoting inclusivity and diversity within the AI community. There is still work to be done in this space.

Cybersecurity solutions leveraging AI technology have been developed to address the growing threat of cyberattacks. These solutions provide defense mechanisms against evolving cyber threats, bolstering security measures. This topic is on my shortlist. Stay tuned.

Digital twinning has become a tool for simulating real-world situations and improving processes. It allows businesses and industries to create replicas that assist decision-making and boost efficiency. This technology leverages machine learning algorithms to analyze sensor data and identify patterns. Through artificial intelligence and machine learning (AI/ML), organizations gain insights to enhance performance, streamline maintenance operations, measure emissions, and improve efficiency.

AI-driven personalization has gained traction across domains. Systems can now tailor products and services to users, offering a customized experience that aligns with their preferences. Customizations are always a plus in my book. This personalized approach has significantly improved user experiences in e-commerce and entertainment domains.

The use of AI in voice technology has advanced steadily, leading to the development of more capable voice assistants. This advancement has improved voice recognition, language comprehension, and interaction between users and AI-driven voice systems. I have taken advantage of this by scripting my presentation when I lost my voice and having it delivered through AI.

AI has also made progress within the healthcare industry. It is now utilized for disease diagnosis and treatment development. Integrating AI into healthcare showcases its potential to transform care and medical research. The bias in medical data is a real concern, so how this data is used may put certain communities at risk. This is a space I am following very closely.

AI continues to challenge the boundaries of creative work, but the creative community is strong. It will be interesting to see how the AI industry begins acknowledging and accepting that creatives are here to stay. Creatives, in turn, are coming to the same acknowledgment about AI.

In the coming years, expect a rise in similar services and products. Instead of viewing this repetition as a drawback, it should be embraced as an advantage. The increasing array of AI options indicates a dynamic ecosystem that provides opportunities and choices for developers, businesses, and users. This wealth of options fosters competition, fuels innovation, and empowers individuals to customize AI solutions according to their requirements. As the AI landscape continues to evolve, the presence of repeated services and products validates the growth of this field. It offers us endless possibilities that contribute significantly to the evolution and accessibility of artificial intelligence.

I appreciate you reading my blog and look forward to sharing more in this space.

PyGraft – A Python-Based AI Tool for Generating Knowledge Graphs

Visualization

Telling a story with data is an effective way to share information. I’ve spent years developing data stories with standard reporting tools as the only option. Visualizing information has become incredibly important today, especially when representing and analyzing relationships between entities and concepts. Knowledge graphs (KGs) capture these relationships through triples (s, p, o), where ‘s’ represents the subject, ‘o’ represents the object, and ‘p’ defines the relationship between them. KGs are often accompanied by schemas or ontologies defining the data’s structure and meaning. They have proven valuable in recommendation systems and natural language understanding.
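
Since KGs boil down to (s, p, o) triples, here is a tiny sketch using the rdflib library to build a few triples plus a minimal schema. The example namespace, class, and property names are made up purely for illustration.

```python
# Tiny knowledge-graph sketch: (subject, predicate, object) triples with rdflib.
# The example namespace and terms are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# A minimal schema: a class and a property with domain/range.
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.worksFor, RDFS.domain, EX.Person))
g.add((EX.worksFor, RDFS.range, EX.Organization))

# Instance-level triples (s, p, o).
g.add((EX.ada, RDF.type, EX.Person))
g.add((EX.ada, EX.worksFor, EX.AcmeCorp))
g.add((EX.ada, RDFS.label, Literal("Ada")))

# Serialize the graph in Turtle to see the relationships we just captured.
print(g.serialize(format="turtle"))
```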

Challenges Associated with Mainstream KGs

While knowledge graphs are powerful tools, there are limitations to relying solely on mainstream knowledge graphs for evaluating AI models. These mainstream KGs often share structural properties that can lead to skewed evaluations, particularly in tasks like node classification. Additionally, some datasets used for link prediction contain biases and exploitable inference patterns that can distort assessments of models.

Furthermore, in fields like education, law enforcement, and healthcare, where data privacy is a concern, publicly accessible knowledge graphs are not always readily available. Researchers and practitioners in these domains often have specific requirements for their knowledge graphs, so it is crucial to be able to create graphs that replicate the characteristics of real-world graphs.

To tackle these challenges, a group of researchers from Université de Lorraine and Université Côte d’Azur has developed PyGraft, an open-source, Python-based AI tool. PyGraft aims to generate customized schemas and knowledge graphs that are applicable across domains.

Contributions to this space:

  • Pipeline for Knowledge Graph Generation:
    PyGraft introduces a pipeline for generating schemas and knowledge graphs, allowing researchers and practitioners to customize the generated resources according to their requirements. This ensures flexibility and adaptability.
  • Domain-Neutral Resources:
    One remarkable feature of PyGraft is its ability to create domain-agnostic schemas and knowledge graphs. This means the generated resources can be used for benchmarking and experimentation across fields and applications, eliminating the dependence on domain-specific KGs and making PyGraft an invaluable tool for research in data-limited domains.
  • Expanded Range of RDFS and OWL Elements:
    PyGraft uses RDF Schema (RDFS) and Web Ontology Language (OWL) elements to construct knowledge graphs with rich semantics. This allows detailed resource descriptions while adhering to accepted Semantic Web standards.
  • Ensuring Logical Coherence through DL Reasoning:
    The tool uses a reasoning system based on Description Logic (DL) to ensure that the resulting schemas and knowledge graphs are logically coherent. This process guarantees that the generated knowledge graphs conform to the principles of their underlying ontology.

Accessibility in a Tool

PyGraft is an open-source project with available code and documentation. It also includes examples to make it user-friendly for beginners and experienced users.

PyGraft is a Python library that researchers and practitioners can use to generate schemas and knowledge graphs (KGs) according to their requirements. It enables the creation of schemas and KGs on demand, given only the desired specifications. The resources generated are not tied to any application field, making PyGraft a valuable tool for data-limited research domains.

Features:

  • It can generate schemas, knowledge graphs, or both.
  • The generation process is highly customizable through user-defined parameters.
  • Schemas and KGs are constructed using a range of RDFS and OWL constructs.
  • Logical consistency is guaranteed by employing a DL reasoner called HermiT.
  • A single pipeline can synthesize both schemas and knowledge graphs.
  • Generated schemas and KGs are built with a broad set of RDFS (Resource Description Framework Schema) and OWL (W3C Web Ontology Language) constructs, ensuring compliance with widely used Semantic Web standards.
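
The workflow these features describe boils down to generating a config template, tuning its parameters, and then generating a schema and/or KG. The sketch below follows that flow; the function names are recalled from the PyGraft documentation and should be treated as approximate and verified against the docs linked under Resources.

```python
# Sketch of the PyGraft workflow as described in its documentation: create a
# config template, tune the parameters, then generate a schema and/or KG.
# Treat the exact function names as assumptions and verify them against the
# documentation linked under Resources below.
import pygraft

# 1. Emit a template config file with the tunable generation parameters
#    (numbers of classes, relations, entities, RDFS/OWL constructs, etc.).
pygraft.create_yaml_template()

# 2. Edit the generated template (e.g. template.yml), then generate:
pygraft.generate_schema("template.yml")   # schema only
pygraft.generate_kg("template.yml")       # KG only (requires an existing schema)
pygraft.generate("template.yml")          # schema + KG in one pipeline
```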


PyGraft is an advancement in the field of knowledge graph generation. It overcomes the limitations of mainstream KGs by offering a customizable solution for researchers, practitioners, and engineers.

PyGraft enables users to create KGs that accurately reflect real-world data by adopting a domain-agnostic approach and adhering to Semantic Web standards.

PyGraft bridges the gap between data privacy and the need for high-quality knowledge graphs.

The beauty of this open-source tool is that it encourages collaboration and innovation within the AI and Semantic Web communities, opening up possibilities for knowledge representation and reasoning. This type of technical collaboration is priceless.

Resources:

https://pygraft.readthedocs.io/en/latest/

https://github.com/nicolas-hbt/pygraft

Claude 2.1, Let’s Go!

One of the things I appreciate and respect about Anthropic, the creators of Claude, is the transparency of their messaging and content. The content is easy to understand, and that’s a plus in this space. Whenever I visit their site, I have a clear picture of where they are and their plans for moving forward. OpenAI’s recent shenanigans have piqued my curiosity to revisit other chatbot tools. Over a month ago, I wrote a comparative discussion about a few AI tools. One of the tools I discussed was Claude 2.0. Now that Claude 2.1 has been released, I wanted to share a few highlights based on my research. Note that most of these features are by invitation only (API Console) or fee-based (Pro access only) and are not currently available in the free tier. There is a robust documentation library for Claude that is worth reviewing.

The Basics

  • Claude 2.1 is a chatbot tool developed by Anthropic. The company builds large language models (LLMs) as a cornerstone of its development initiatives and of its flagship chatbot, Claude.
  • Claude 2.1 is available in the API Console in Anthropic’s latest release and powers the claude.ai chat experience.
  • In the previous version, Claude 2.0 could handle 100,000 tokens, which translated to inputs of around 75,000 words.
  • A token is a unit of text that AI models use to represent and process natural language. Depending on the tokenization method used, a unit can be a word, a piece of text or code, or individual characters. Each unit of text is assigned a numeric value that is fed into the model.
  • Claude 2.1 delivers an industry-leading 200K token context window, translating to around 150,000 words, or about 500 pages.
  • A significant reduction in model hallucination rates and the addition of system prompts in version 2.1 mean more consistent and accurate responses.

200k Tokens Oh My!

Why the increase in the number of tokens? Anthropic is listening to their growing community of users. Based on use cases, Claude was used for application development and analyzing complex plans and documents. Users wanted more tokens to review large data sets. Claude aims to produce more accurate outputs when working with larger data sets and longer documents.

With this increase in tokens, users can now upload entire codebases, technical documentation, or financial reports. By analyzing detailed content or data, Claude can summarize, conduct Q&A, forecast trends, spot variations across several revisions of the same content, and more.

Processing large datasets and leveraging the benefits of AI by pushing the limit up to 200,000 tokens is a complex feat and an industry first. Although AI cannot replace humans altogether, it can allow humans to use time more efficiently. Tasks typically requiring hours of human effort to complete may take Claude a few minutes. Latency should decrease substantially as this type of technology progresses.

Decrease in Hallucination Rates

Although I am interested in the hallucination aspects of AI, for most businesses this is not ideal. Claude 2.1 has also made significant gains in credibility, with a decrease in false statements compared to the previous Claude 2.0 model. Companies can build high-performing AI applications that solve concrete business problems and deploy AI with greater trust and reliability.

Claude 2.1 has also made meaningful improvements in comprehension and summarization, particularly for long, complex documents that demand high accuracy, such as legal documents, financial reports, and technical specifications. Use cases have shown that Claude 2.1 demonstrated more than a 25% reduction in incorrect answers and a 2x or lower rate of mistakenly concluding a document supports a particular claim. Claude continues to focus on enhancing their outputs’ precision and dependability.

API Tool Use

I am excited to hear about the beta feature that allows Claude to integrate with users’ existing processes, products, and APIs. This expanded interoperability aims to make Claude more useful. Claude can now orchestrate across developer-defined functions or APIs, search over web sources, and retrieve information from private knowledge bases. Users can define a set of tools for Claude and specify a request. The model will then decide which tool is required to achieve the task and execute an action on its behalf.

The Console

New consoles can often be overwhelming, but Anthropic made the commendable choice to simplify the developer Console experience for Claude API users while making it easier to test new prompts for faster learning. The new Workbench product enables developers to iterate on prompts in a playground-style experience and access new model settings to optimize Claude’s behavior. Users can create multiple prompts and navigate between them for different projects, and revisions are saved as they go to retain historical context. Developers can also generate code snippets to use their prompts directly in one of Anthropic’s SDKs. Access to the Console is by invitation only as of when this content was published.

Anthropic will empower developers by adding system prompts, allowing users to provide custom instructions to Claude to improve performance. System prompts set helpful context that enhances Claude’s ability to assume specified personalities and roles or structure responses in a more customizable, consistent way that aligns with user needs.
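
As a sketch of what a system prompt looks like in practice, the snippet below uses the current Anthropic Python SDK, whose Messages API accepts legacy 2.x model names and a top-level system parameter; the exact interface at the time of the 2.1 launch differed slightly. The persona and prompt text are illustrative.

```python
# Sketch: giving Claude 2.1 a system prompt via the Anthropic Python SDK.
# Assumes the Messages API and an ANTHROPIC_API_KEY in the environment;
# the persona and prompt text are illustrative only.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-2.1",
    max_tokens=300,
    system=(
        "You are a meticulous financial analyst. Answer only from the report "
        "provided by the user, and say 'I don't know' if the report is silent."
    ),
    messages=[
        {
            "role": "user",
            "content": "Summarize the main revenue drivers in the attached report: ...",
        }
    ],
)
print(response.content[0].text)
```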

Claude 2.1 is available in their API and powers the chat interface at claude.ai for both the free and Pro tiers. This advantage is for those who want to test drive before committing to Pro. Usage of the 200K token context window is reserved for Claude Pro users, who can now upload larger files.

Overall, I am happy to see these improvements with Claude 2.1. As a technology person interested in large data sets, I like having choices in this space and more opportunities to learn about LLMs. Claude is on my shortlist.

Originally published at https://mstechdiva.com on November 23, 2023.

Open Source AI Gets the Bird

Open source creates opportunities for developers worldwide to work together on projects, share knowledge, and collectively enhance software solutions. This inclusive approach not only speeds up advancements but also ensures that cutting-edge tools and technologies are available to everyone. So it always warms my heart when I see new innovations in this space.

Open source software drives innovation by reducing development costs and ensuring transparency and security. To me, it embodies the essence of collective intelligence, bringing developers together to learn from each other and shape the future of technology as a united community.

The artificial intelligence community has reached a significant milestone with the introduction of Falcon 180B, an open-source large language model (LLM) that boasts an astonishing 180 billion parameters, trained on an unprecedented volume of data. This groundbreaking release, announced by the Hugging Face AI community in a recent blog post, has already profoundly impacted the field. Falcon 180B builds upon the success of its predecessors in the Falcon series, introducing innovations such as multi-query attention to achieve its impressive scale. It was trained on a staggering 3.5 trillion tokens, representing the longest single-epoch pretraining for any open-source model to date.

Scaling Unleashed

Achieving this goal was no small endeavor. Falcon 180B required the coordinated power of 4,096 GPUs working simultaneously for approximately 7 million GPU hours, with the training and refinement process orchestrated through Amazon SageMaker. To put the size of this LLM in perspective, the model’s parameters measure 2.5 times larger than those of Meta’s LLaMA 2, which had previously been considered the most capable open-source LLM with 70 billion parameters trained on 2 trillion tokens. The numbers and data involved are staggering; it’s like an analyst’s dream.

Performance Breakthrough

Falcon 180B isn’t just about scale; it excels in benchmark performance across various natural language processing (NLP) tasks. On the leaderboard for open-access models, it impressively scores 68.74 points, coming close to commercial giants like Google’s PaLM-2 on the HellaSwag benchmark. It matches or exceeds PaLM-2 Medium on commonly used benchmarks like HellaSwag, LAMBADA, WebQuestions, Winogrande, and more and performs on par with Google’s PaLM-2 Large. This level of performance is a testament to the capabilities of open-source models, even when compared to industry giants.

Comparing with ChatGPT

When measured against ChatGPT, Falcon 180B sits comfortably between GPT-3.5 and GPT-4, depending on the evaluation benchmark. While it may not surpass the capabilities of the paid “plus” version of ChatGPT, it certainly gives the free version a run for its money. I am always happy to see this type of healthy competition in this space.

The Hugging Face community is strong, so there is potential for further fine-tuning by the community, which is expected to yield even more impressive results. Falcon 180B’s open release marks a significant step forward in the rapid evolution of large language models, showcasing advanced natural language processing capabilities right from the outset.
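
For those who want to experiment, the weights live on the Hugging Face Hub under the tiiuae organization. A minimal loading sketch with transformers might look like the following; note that the full 180B model needs hundreds of gigabytes of GPU memory (or aggressive quantization), and access requires accepting the license on the model card. The prompt text here is just an example.

```python
# Sketch: loading Falcon-180B from the Hugging Face Hub with transformers.
# Running the full model requires very large multi-GPU hardware (or heavy
# quantization) and acceptance of the license on the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-180B"  # smaller siblings like tiiuae/falcon-40b also exist

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shards the model across available GPUs (needs accelerate)
)

inputs = tokenizer("Open source AI matters because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```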

A New Chapter in Efficiency

Beyond its sheer scale, Falcon 180B embodies the progress in training large AI models more efficiently. Techniques such as LoRAs, weight randomization, and Nvidia’s Perfusion have played pivotal roles in achieving this efficiency, heralding a new era in AI model development.

With Falcon 180B now freely available on Hugging Face, the AI research community eagerly anticipates further enhancements and refinements. This release marks a huge advancement for open-source AI, setting the stage for exciting developments and breakthroughs. Falcon 180B has already demonstrated its potential to redefine the boundaries of what’s possible in the world of artificial intelligence, and its journey is just beginning. It’s the numbers for me. I am always happy to see this growth in this space. Yes, “the bird” was always about technology. The references shared below give you a great head start in understanding all about Falcon.

References:

huggingface on GitHub

huggingface Falcon documentation

Falcon Models from Technology Innovation Institute

Human Engineering in AI

Engineers are tasked with comprehending the layers of artificial intelligence (AI), including its strengths and limitations. Engineering plays an indispensable role in the development of AI and in harnessing its power. However, it’s essential to acknowledge and respect AI’s boundaries. Let’s consider some of AI’s characteristics, capabilities, and limitations that we are aware of today.

The effectiveness of AI models heavily relies on the data they are trained on. This reliance means that biases or limitations in the training data carry over into the model. Engineers need to be aware of these biases and actively work towards addressing them through representative training data—a practice commonly referred to as Responsible AI.

It is crucial to remember that AI models lack emotions, intentions, or subjective experiences as humans have them. Their operations are based on algorithms and logical rules, which engineers must understand firsthand. Therefore, caution should be exercised when interpreting AI-generated content, since bias can inadvertently seep into its output.

Despite its capabilities, an AI model cannot truly engage in cognition or attain consciousness comparable to human beings. It has the capability to process and analyze data, generate responses, and imitate human behavior. However, it operates based on predefined algorithms and statistical patterns rather than possessing human qualities. I want to emphasize that it is not a sentient being. The AI model lacks experiences, emotions, and the ability to be conscious like humans. Instead, its functionality relies on computational processes rather than the complex cognitive abilities of humans.

No matter how large or intricate the AI model is, it is unable to hold genuine internal dialogue or engage in self-reflection. While it can process input and generate responses accordingly, its system has no mechanism for introspection or self-awareness. Its primary focus is interacting with users or external systems by utilizing its knowledge and adaptive methods to provide insights and responses. Let’s consider the importance and necessity of humans in AI.

The roles of Humans in the field of AI:

  1. Data Collection and Annotation: The process of training AI systems relies heavily on large amounts of data. Humans are instrumental in collecting, cleaning, and annotating this data to ensure its quality and relevance. They meticulously label the data, verify its accuracy, and strive to create representative datasets for training AI models.
  2. Model Training and Tuning: Developing AI models requires making decisions regarding architecture design, selecting hyperparameters, and training the models using suitable datasets. Human expertise is indispensable in making these decisions, and intuition and domain knowledge contribute significantly to tuning models for specific tasks.
  3. Ethical and Moral Considerations: Given the impact of AI systems, both positive and negative, humans are responsible for ensuring the ethical development and use of AI technology. Upholding values such as bias mitigation, fairness, transparency, and privacy requires human judgment.
  4. Interpreting and Understanding AI Outputs: AI models can generate outputs that may sometimes be unexpected or difficult to comprehend. Human interpretation is essential to grasp these outputs in domains such as healthcare, finance, and law. Humans provide insights into understanding the implications of AI-generated results.

Human oversight plays a role in preventing complete reliance on AI systems and mitigating potential harmful consequences.

  1. Adaptability to Changing Situations: AI systems often struggle to adapt when confronted with situations that differ from their training data. Humans can quickly adapt to new scenarios, exercise common-sense judgment, and respond flexibly to novel situations that might be challenging for AI.
  2. Approach to Problem-Solving: While AI excels at pattern recognition and optimization, human creativity remains unparalleled. Creative problem-solving, critical thinking, and the ability to think “outside the box” are areas where human intelligence truly shines and complements the capabilities of AI.
  3. Development and Enhancement of AI Models: Humans are responsible for designing and developing AI models. The evolution of AI algorithms and architectures relies heavily on human ingenuity to create advanced and efficient models.
  4. Human-AI Collaboration: Rather than aiming for replacement, the goal of AI is often to augment human abilities. Collaborative efforts between humans and AI can lead to more effective outcomes, with humans providing overarching guidance while leveraging AI’s capability to handle data-intensive tasks.
  5. Navigating Ambiguity and Uncertainty: Many real-world situations involve ambiguity and uncertainty. Humans are more adept at handling such situations, as they rely on intuition and experience to navigate ambiguous scenarios.
  6. Ensuring Safety and Control: Humans must lead in establishing safeguards and mechanisms that guarantee the operation of AI systems within defined parameters. This involves implementing control tools and incorporating human oversight for critical decision-making.

Human involvement in AI remains indispensable because of human judgment, ethical awareness, adaptability, creativity, and aptitude for intricate decision-making. While AI technologies continue to advance, humans provide the supervision and guidance needed to ensure that AI is developed and deployed in ways that benefit society.

As engineers come to understand AI’s capabilities and limitations, it becomes essential to harness its power responsibly. AI models process vast amounts of data and rely on engineers to integrate safeguards. However, human involvement is still necessary for genuine cognition, internal dialogue, and the generation of original ideas. Engineers acknowledge that AI models lack emotions, intentions, or subjective experiences; therefore, they must make sound decisions and responsibly apply AI’s potential in their respective fields. The engineer’s role is pivotal and contributes significantly to the development of AI.