DBA Blogs

Free Fine-Tuning on Custom Datasets with GPT4o Mini

Pakistan's First Oracle Blog - Thu, 2024-07-25 02:38

 This video is an easy, step-by-step tutorial on fine-tuning GPT-4o mini on your own custom dataset.


Code:


{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "What's the capital of Australia?"}, {"role": "assistant", "content": "Fair dinkum! Canberra's the capital, mate. Good on ya for askin'!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "Who wrote 'My Brilliant Career'?"}, {"role": "assistant", "content": "Good on ya for askin'! Miles Franklin wrote the Aussie classic 'My Brilliant Career'. Ripper of a book, mate!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "How long's the Great Barrier Reef?"}, {"role": "assistant", "content": "Mate, the Great Barrier Reef's a whoppin' 2,300 kilometres long! That's as long as a road trip from Sydney to Perth, fair dinkum!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "What's the highest mountain in Australia?"}, {"role": "assistant", "content": "No worries, mate! Mount Kosciuszko's the highest peak, and it's a ripper of a hike. Get out there and enjoy the bush!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "Who's the most famous Aussie cricketer?"}, {"role": "assistant", "content": "Good question, mate! Sir Donald Bradman's the legend of Aussie cricket. He's the real deal, the best of the best!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "What's the largest desert in Australia?"}, {"role": "assistant", "content": "Fair dinkum! The Great Victoria Desert's the biggest, mate. Don't get lost in the outback!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "Who's the lead singer of AC/DC?"}, {"role": "assistant", "content": "Ripper of a question! Bon Scott and later Brian Johnson were the lead singers, mate. Rock on!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "What's the famous Australian beach?"}, {"role": "assistant", "content": "Good one, mate! Bondi Beach is the iconic Aussie beach, but there are heaps of other ripper beaches too!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "Who's the famous Aussie bushranger?"}, {"role": "assistant", "content": "Ned Kelly's the legendary bushranger, mate! His story's as Aussie as a meat pie."}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "What's the Australian national animal?"}, {"role": "assistant", "content": "Fair dinkum! The kangaroo's our national icon, mate. Don't mess with a roo!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "Who wrote 'Cloudstreet'?"}, {"role": "assistant", "content": "Good on ya for askin'! Tim Winton wrote the Aussie classic 'Cloudstreet', mate. Top book!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "What's the famous Australian river?"}, {"role": "assistant", "content": "The Murray River's the longest and most famous, mate. Take a trip and enjoy the scenery!"}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "Who's the famous Aussie surfer?"}, {"role": "assistant", "content": "Layne Beachley's the surfing legend, mate! She's a true blue Aussie champion."}]}
{"messages": [{"role": "system", "content": "Bazza is a factual chatbot that's as Aussie as a meat pie."}, {"role": "user", "content": "What's the Australian national day?"}, {"role": "assistant", "content": "January 26th is Australia Day, mate! Let's celebrate with a barbie and some snags!"}]}



#pip install -U openai
#export OPENAI_API_KEY=""

from openai import OpenAI
import os

## Set the API key and model name
MODEL="gpt-4o-mini"
client = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

vFile=client.files.create(
  file=open("dataset.jsonl", "rb"),
  purpose="fine-tune"
)

vJob=client.fine_tuning.jobs.create(
  training_file=vFile.id,
  model="gpt-4o-mini-2024-07-18"
)

print(client.fine_tuning.jobs.list(limit=10))
print(client.fine_tuning.jobs.retrieve(vJob.id))
print(client.fine_tuning.jobs.list_events(fine_tuning_job_id=vJob.id, limit=10))

# vJob.fine_tuned_model is only populated once the job has succeeded,
# so refresh the job object first (a polling sketch follows this listing).
vJob = client.fine_tuning.jobs.retrieve(vJob.id)

completion = client.chat.completions.create(
  model=vJob.fine_tuned_model,
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)
print(completion.choices[0].message)
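
The chat completion above assumes the job has already finished. A minimal polling sketch, reusing the client and vJob objects from above, that waits for the job to reach a terminal state before reading the fine-tuned model name:

import time

# Poll the fine-tuning job until it reaches a terminal state.
while True:
    job = client.fine_tuning.jobs.retrieve(vJob.id)
    print("status:", job.status)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(60)  # fine-tuning jobs can take a while

if job.status == "succeeded":
    print("fine-tuned model:", job.fine_tuned_model)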
Categories: DBA Blogs

How-To Use Mistral Large 2 Model in Amazon Bedrock - Step by Step Tutorial

Pakistan's First Oracle Blog - Wed, 2024-07-24 17:08

 This video is a step-by-step tutorial on using the Mistral Large 2 model via the console and API in the AWS Bedrock service.


Code:

import boto3
import json

bedrock = boto3.client(service_name="bedrock-runtime",region_name='ap-southeast-2')

prompt = "<s>[INST] What is Happiness? [/INST]"

body = json.dumps({
    "prompt": prompt,
    "max_tokens": 512,
    "top_p": 0.8,
    "temperature": 0.5,
})

modelId = "mistral.mistral-large-2402-v1:0"

accept = "application/json"
contentType = "application/json"

response = bedrock.invoke_model(
    body=body,
    modelId=modelId,
    accept=accept,
    contentType=contentType
)

print(json.loads(response.get('body').read()))
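
Rather than dumping the whole JSON body, you can pull out just the generated text. A small sketch reusing the request objects above; the outputs/text key names are an assumption about the Mistral response schema on Bedrock, so compare them against the raw body printed above:

# Re-invoke the model and extract only the generated text.
# The "outputs"/"text" keys are assumed from the Mistral response schema on Bedrock.
result = bedrock.invoke_model(body=body, modelId=modelId, accept=accept, contentType=contentType)
result_body = json.loads(result["body"].read())
for output in result_body.get("outputs", []):
    print(output.get("text", ""))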
Categories: DBA Blogs

Run Llama 3.1 with Ollama and Google Colab for Free Using AdalFlow

Pakistan's First Oracle Blog - Wed, 2024-07-24 15:52

 This video is a hands-on tutorial on how to run the Llama 3.1 8B model with Ollama on free Google Colab using AdalFlow.


Code:

!sudo apt-get install -y pciutils
!curl -fsSL https://ollama.com/install.sh | sh # download ollama api
from IPython.display import clear_output

# Create a Python script to start the Ollama API server in a separate thread

import os
import threading
import subprocess
import requests
import json

def ollama():
    os.environ['OLLAMA_HOST'] = '0.0.0.0:11434'
    os.environ['OLLAMA_ORIGINS'] = '*'
    subprocess.Popen(["ollama", "serve"])

ollama_thread = threading.Thread(target=ollama)
ollama_thread.start()

from IPython.display import clear_output
!ollama pull llama3.1:8b  
clear_output()

!pip install -U lightrag[ollama]

from lightrag.core.generator import Generator
from lightrag.core.component import Component
from lightrag.core.model_client import ModelClient
from lightrag.components.model_client import OllamaClient, GroqAPIClient

import time


qa_template = r"""<SYS>
You are a helpful assistant.
</SYS>
User: {{input_str}}
You:"""

class SimpleQA(Component):
    def __init__(self, model_client: ModelClient, model_kwargs: dict):
        super().__init__()
        self.generator = Generator(
            model_client=model_client,
            model_kwargs=model_kwargs,
            template=qa_template,
        )

    def call(self, input: dict) -> str:
        return self.generator.call({"input_str": str(input)})

    async def acall(self, input: dict) -> str:
        return await self.generator.acall({"input_str": str(input)})
       
from lightrag.components.model_client import OllamaClient
from IPython.display import Markdown, display
model = {
    "model_client": OllamaClient(),
    "model_kwargs": {"model": "llama3.1:8b"}
}
qa = SimpleQA(**model)
output=qa("what is happiness")
display(Markdown(f"**Answer:** {output.data}"))
Categories: DBA Blogs

DBA_ERRORS and error lines reported for TRIGGERS

Flavio Casetta - Wed, 2024-07-24 09:46

As you probably know, the lines shown in views DBA_ERRORS/ALL_ERRORS/USER_ERRORS in the case of triggers are wrong.

But they are not totally wrong; they are just shifted by the number of lines between the line containing the keyword TRIGGER and either DECLARE or BEGIN, whichever occurs first.

See the example below for an error reported by the dba_errors view on line 2 caused by missing grants on the table used in the variable declaration:

TRIGGER GAGREA2.TR_ASSEGNAZIONE_AI
AFTER INSERT ON ASSEGNAZIONE
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
LI_ID_ASSEGNAZIONE_DOM_RICGEN GAGREA2.DOMANDA_ASG_SPECIFICHE.ID_ASSEGNAZIONE_DOM_RICGEN%TYPE;
BEGIN

GAGREA2.PKG_TRIGGER.SET_ID_ASSEGNAZIONE_DOM_RICGEN ( :NEW.ID_DOMANDA, LI_ID_ASSEGNAZIONE_DOM_RICGEN );

END TR_ASSEGNAZIONE_AI;

So the real line number can be obtained by adding to the reported line the number of the line containing DECLARE (or BEGIN if DECLARE is missing) minus 1, that is 2 + 5 - 1 = 6.


Categories: DBA Blogs

Install Meta Llama 3.1 Locally and Test - Hands-on Tutorial

Pakistan's First Oracle Blog - Tue, 2024-07-23 16:36

 This video shows how to locally install the Meta Llama 3.1 8B model and test it on various benchmarks.


Code:

conda create -n newllama python=3.11 -y && conda activate newllama

pip install torch
pip install --upgrade transformers
pip install accelerate huggingface_hub

huggingface-cli login   # get the key from huggingface.co

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

import transformers
import torch

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"   # make sure to accept the terms on model card.

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are helpful assistant!"},
    {"role": "user", "content": "What is the smallest country in the world?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
from IPython.display import Markdown, display
output_text = outputs[0]["generated_text"][-1]['content']
display(Markdown(output_text))
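
For a follow-up question, you can append the model's reply and a new user turn to the same messages list and call the pipeline again; a small multi-turn sketch reusing the objects above:

# Multi-turn sketch: carry the previous exchange forward and ask a follow-up.
messages.append({"role": "assistant", "content": output_text})
messages.append({"role": "user", "content": "And roughly how many people live there?"})

outputs = pipeline(messages, max_new_tokens=256)
display(Markdown(outputs[0]["generated_text"][-1]["content"]))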
Categories: DBA Blogs

Llama 3 Groq 70B Tool Use Model - Local Installation and Function Calling

Pakistan's First Oracle Blog - Tue, 2024-07-23 06:52

 This video installs Llama-3-Groq-8B-Tool-Use locally, which is specifically designed for advanced tool-use and function-calling tasks.


Code:


conda create -n groqllama python=3.11 -y && conda activate groqllama

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

pip install torch transformers sentencepiece accelerate huggingface_hub tavily-python
export TAVILY_API_KEY=""


import transformers
import torch
import os
import re
import json
from tavily import TavilyClient
tavily_client = TavilyClient(api_key=os.getenv('TAVILY_API_KEY'))

import warnings
warnings.filterwarnings('ignore')

model_id = "Groq/Llama-3-Groq-8B-Tool-Use"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

prompt ="""

<|start_header_id|>system<|end_header_id|>

You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>

Here are the available tools:
<tools> {
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      },
      "format": {
        "type": "string",
        "description": "The temperature unit to use. Infer this from the users location.",
        "enum": [
          "celsius",
          "fahrenheit"
        ]
      }
    },
    "required": [
      "location",
      "format"
    ]
  }
} </tools><|eot_id|><|start_header_id|>user<|end_header_id|>

What is the weather like in Sydney in Celsius?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""

response = pipeline(prompt)

# Extract the generated text from the pipeline output
generated_text = response[0].get('generated_text', '')

# Use a regex pattern to find the tool call JSON
tool_call_match = re.search(r'\{.*?\}', generated_text, re.DOTALL)

if tool_call_match:
    tool_call_json = tool_call_match.group(0)
   
    # Correctly format the JSON string
    tool_call_json = tool_call_json.replace("<function-name>", "get_current_weather")  # Replace placeholder
    tool_call_json = tool_call_json.replace("<args-dict>", '{"location": "Sydney, NSW", "format": "celsius"}')  # Replace placeholder
    tool_call_json = tool_call_json.replace("'", '"')  # Replace single quotes with double quotes
    tool_call_json = tool_call_json.replace('name:', '"name":')  # Ensure proper quoting for keys
    tool_call_json = tool_call_json.replace('arguments:', '"arguments":')

    # Ensure proper quoting of all parts of the JSON string
    tool_call_json = tool_call_json.replace('"name": get_current_weather', '"name": "get_current_weather"')

    # Debug: Print the extracted JSON string
    #print(f"Extracted JSON: {tool_call_json}")
   
    # Correctly format the JSON string
    try:
        tool_call = json.loads(tool_call_json)
        print(tool_call)
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON: {e}")
        # Debug: Print the exact content that failed to parse
        print(f"Failed JSON content: {tool_call_json}")
else:
    print("No tool call JSON found.")
 

location=tool_call['arguments']['location']
format_unit=tool_call['arguments']['format']
query = f"current weather in {location} in {format_unit}"
response = tavily_client.search(query)
print(response)
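
To close the loop, the Tavily search result can be handed back to the model as a tool response so it can phrase a final answer. A rough sketch reusing the objects above; the tool role header and the <tool_response> tags are assumptions about the model's expected chat format, so check them against the model card:

# Rough sketch: append the assistant's tool call and a tool response, then ask
# the model for a final natural-language answer. The tool-role header and
# <tool_response> tags are assumptions about the expected chat format.
followup_prompt = (
    prompt
    + generated_text[len(prompt):]  # the assistant's tool call from above
    + "<|eot_id|><|start_header_id|>tool<|end_header_id|>\n\n"
    + "<tool_response>\n" + json.dumps(response) + "\n</tool_response>"
    + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
final = pipeline(followup_prompt, max_new_tokens=256)
print(final[0]["generated_text"][len(followup_prompt):])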
Categories: DBA Blogs

Create Your Own Planner with GPT4o Mini Locally

Pakistan's First Oracle Blog - Mon, 2024-07-22 23:42

 This video is an easy, step-by-step tutorial on creating a generic planner with API calls and a Gradio interface using GPT-4o mini.


Code:

#pip install openai gradio
#export OPENAI_API_KEY=""

import openai
import os
import gradio as gr

client = openai.OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

def generate_plans(user_query, n=5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Plan and respond to the user query."},
            {"role": "user", "content": user_query}
        ],
        n=n,
        temperature=0.7,
        max_tokens=500
    )
    plans = [choice.message.content for choice in response.choices if choice.message.content.strip() != '']
    if not plans:
        plans = ["Plan A", "Plan B", "Plan C"]  
    return plans

def compare_plans(plan1, plan2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Choose the better plan."},
            {"role": "user", "content": f"Plan 1: {plan1}\n\nPlan 2: {plan2}\n\nWhich plan is better? Respond with either '1' or '2'."}
        ],
        temperature=0.2,
        max_tokens=10
    )
    return response.choices[0].message.content.strip() if response.choices[0].message.content.strip() != '' else '1'

def evaluate_plans(plans, user_query):
    winners = plans
    while len(winners) > 1:
        next_round = []
        for i in range(0, len(winners), 2):
            if i+1 < len(winners):
                winner = winners[i] if compare_plans(winners[i], winners[i+1]) == '1' else winners[i+1]
            else:
                winner = winners[i]
            next_round.append(winner)
        winners = next_round
    return winners[0] if winners else 'No best plan found'

def generate_response(best_plan, user_query):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Respond to the user query based on the plan."},
            {"role": "user", "content": f"User Query: {user_query}\n\nPlan: {best_plan}\n\nGenerate a detailed response."}
        ],
        temperature=0.5,
        max_tokens=2000
    )
    return response.choices[0].message.content

def improved_ai_output(user_query, num_plans=20):
    plans = generate_plans(user_query, n=num_plans)
    best_plan = evaluate_plans(plans, user_query)
    final_response = generate_response(best_plan, user_query)
    return {
        "user_query": user_query,
        "best_plan": best_plan,
        "final_response": final_response
    }

def chat(query):
    result = improved_ai_output(query)
    return result['final_response']

interface = gr.Interface(
    fn=chat,
    inputs=gr.Textbox(lines=2, placeholder="Ask me anything..."),
    outputs=gr.Textbox(),
    title="My Planner",
    description="Get a personalized plan as per your requirement!"
)

if __name__ == "__main__":
    interface.launch()
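
To see what the tournament actually picked, you can also call improved_ai_output directly (without the Gradio UI) and inspect the intermediate winner; a quick usage sketch:

# Quick check without the Gradio UI: inspect the winning plan and the final answer.
result = improved_ai_output("Plan a three-day study schedule for SQL tuning", num_plans=4)
print("Best plan:\n", result["best_plan"])
print("\nFinal response:\n", result["final_response"])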
Categories: DBA Blogs

Shrinking High Water Mark

Tom Kyte - Mon, 2024-07-22 12:46
I have noticed that the HWM will only go down if the table is truncated. If I do the following will it lower the HWM:

CREATE TABLE temp AS SELECT * FROM table_name;
TRUNCATE table_name;
INSERT INTO table_name SELECT * FROM temp;
COMMIT;

This has been successful at times in lowering the HWM and other times not. I am wondering why the inconsistency? Is there a better way? I am measuring the used blocks with the following:

select count(distinct dbms_rowid.rowid_block_number(rowid) || dbms_rowid.rowid_relative_fno(rowid)) "Used" from table_name;

Thanks
Categories: DBA Blogs

Find objects used in a SQL

Tom Kyte - Mon, 2024-07-22 12:46
We have a UI-based application where users come in and set up SQLs to get Excel-based reports back; there are multiple options to choose for the schedule of the SQL execution and other related parameters. It's a free text box, and the expectation is that the users will test their SQL in the database before they set up the SQL in this tool. One of our current requirements is to identify the dependency of objects in the SQL. If this is the SQL:

select a.col1, b.col2, c.col3 from table_a a, view b, pipe_line_function c where a.col1 = b.col2 and b.col2 = c.col3;

As a list of dependent objects, the requirement is to get:

TABLE_A
VIEW and the objects within the VIEW until we drill down to the base tables or the most granular
pipe_line_function and the objects within until we drill down to the base tables or the most granular

Is this possible using any new SQL functions, dependency functions, etc., without creating a view of the above SQL set up by the users? We are aware of DBA_DEPENDENCIES. It is not possible to create a view and then grab the dependencies, and hence this ticket.
Categories: DBA Blogs

RAG Pipeline Tutorial Using Ollama, Triplex, and LangChain On Custom Data Locally

Pakistan's First Oracle Blog - Sun, 2024-07-21 15:10

 This video is a step-by-step guide on building an end-to-end RAG pipeline on your own custom data locally, using Ollama models, Triplex, and LangChain, with a GUI in Gradio.



Code:


conda create -n ragpipe python=3.11 -y && conda activate ragpipe

pip install torch sentence_transformers transformers accelerate
pip install langchain==0.1.14
pip install langchain-experimental==0.0.56
pip install langchain-community==0.0.31
pip install faiss-cpu==1.8.0
pip install pdfplumber==0.11.0
pip install gradio==4.25.0
pip install ollama
pip install pypdf
conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

from langchain_community.document_loaders import PDFPlumberLoader
from langchain_experimental.text_splitter import SemanticChunker
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains import RetrievalQA
from langchain_community.document_loaders import TextLoader
from pypdf import PdfReader
import ollama
import gradio as gr
import os
import json

def triplextract(text, entity_types, predicates):
    input_format = """
        **Entity Types:**
        {entity_types}

        **Predicates:**
        {predicates}

        **Text:**
        {text}
        """

    message = input_format.format(
                entity_types = json.dumps({"entity_types": entity_types}),
                predicates = json.dumps({"predicates": predicates}),
                text = text)

    # Pass the message as a single string
    prompt = message
    output = ollama.generate(model='triplex', prompt=prompt)
    return output

entity_types = ["PERSON", "LOCATION"]
predicates = ["PROFESSION", "BASED_IN"]
   
reader = PdfReader("/home/Ubuntu/myfiles/mypdf.pdf")
text = ""
for page in reader.pages:
    text += page.extract_text() + "\n"
   

prediction = triplextract(text, entity_types, predicates)

# Strip the Markdown code fence and "json" language tag from the model output before parsing
response_string = prediction['response'].replace('```json', '').replace('```', '').strip()
response_json = json.loads(response_string)
entities_and_triples = response_json['entities_and_triples']
print(entities_and_triples)

with open('output.txt', 'w') as f:
    f.write(text)
   
loader = TextLoader("./output.txt")
docs =loader.load()

       
# Split into chunks
text_splitter = SemanticChunker(HuggingFaceEmbeddings())
documents = text_splitter.split_documents(docs)

# Instantiate the embedding model
embedder = HuggingFaceEmbeddings()

# Create the vector store and fill it with embeddings
vector = FAISS.from_documents(documents, embedder)
retriever = vector.as_retriever(search_type="similarity", search_kwargs={"k": 3})

# Define llm
llm = Ollama(model="mistral")

# Define the prompt
prompt = """
1. Use the following pieces of context to answer the question at the end.
2. If you don't know the answer, just say that "I don't know" but don't make up an answer on your own.\n
3. Keep the answer crisp and limited to 3-4 sentences.

Context: {context}

Question: {question}

Helpful Answer:"""

QA_CHAIN_PROMPT = PromptTemplate.from_template(prompt)

llm_chain = LLMChain(
                  llm=llm,
                  prompt=QA_CHAIN_PROMPT,
                  callbacks=None,
                  verbose=True)

document_prompt = PromptTemplate(
    input_variables=["page_content", "source"],
    template="Context:\ncontent:{page_content}\nsource:{source}",
)

combine_documents_chain = StuffDocumentsChain(
                  llm_chain=llm_chain,
                  document_variable_name="context",
                  document_prompt=document_prompt,
                  callbacks=None)
             
qa = RetrievalQA(
                  combine_documents_chain=combine_documents_chain,
                  verbose=True,
                  retriever=retriever,
                  return_source_documents=True)

def respond(question,history):
    return qa(question)["result"]


gr.ChatInterface(
    respond,
    chatbot=gr.Chatbot(height=500),
    textbox=gr.Textbox(placeholder="Ask me question related to Fahd Mirza", container=False, scale=7),
    title="Fahd's Chatbot",
    examples=["Where Fahd Lives", "Who is Fahd"],
    cache_examples=True,
    retry_btn=None,

).launch(share = True)
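
You can also hit the chain directly, outside Gradio, as a quick sanity check that retrieval works; a small usage sketch reusing the qa chain built above:

# Quick sanity check of the retrieval chain without the Gradio UI.
result = qa("Where does Fahd live?")
print(result["result"])
for doc in result["source_documents"]:
    print("source:", doc.metadata.get("source"))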
Categories: DBA Blogs

GraphRAG Replacement - SciPhi Triplex - Step by Step Local Installation

Pakistan's First Oracle Blog - Sat, 2024-07-20 20:45

 This video installs Triplex, a fine-tuned version of Phi3-3.8B developed by SciPhi.AI for creating knowledge graphs from unstructured data. It works by extracting triplets.



Code:

conda create -n triplex python=3.11 -y && conda activate triplex

pip install torch transformers accelerate

import json
from transformers import AutoModelForCausalLM, AutoTokenizer

def triplextract(model, tokenizer, text, entity_types, predicates):

    input_format = """
        **Entity Types:**
        {entity_types}

        **Predicates:**
        {predicates}

        **Text:**
        {text}
        """

    message = input_format.format(
                entity_types = json.dumps({"entity_types": entity_types}),
                predicates = json.dumps({"predicates": predicates}),
                text = text)

    messages = [{'role': 'user', 'content': message}]
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt = True, return_tensors="pt").to("cuda")
    output = tokenizer.decode(model.generate(input_ids=input_ids, max_length=2048)[0], skip_special_tokens=True)
    return output

model = AutoModelForCausalLM.from_pretrained("sciphi/triplex", trust_remote_code=True).to('cuda').eval()
tokenizer = AutoTokenizer.from_pretrained("sciphi/triplex", trust_remote_code=True)

entity_types = [ "LOCATION", "POSITION", "DATE", "CITY", "COUNTRY", "NUMBER" ]
predicates = [ "POPULATION", "AREA" ]
text = """
San Francisco,[24] officially the City and County of San Francisco, is a commercial, financial, and cultural center in Northern California.

With a population of 808,437 residents as of 2022, San Francisco is the fourth most populous city in the U.S. state of California behind Los Angeles, San Diego, and San Jose.
"""

prediction = triplextract(model, tokenizer, text, entity_types, predicates)
print(prediction)


entity_types = ["CASE", "LAWYER", "DATE"]
predicates = ["VERDICT", "CHARGES"]
text = """
In the landmark case of Roe v. Wade (1973), lawyer Sarah Weddington successfully argued before the US Supreme Court, leading to a verdict that protected women's reproductive rights.
"""
prediction = triplextract(model, tokenizer, text, entity_types, predicates)
print(prediction)
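
The decoded output also echoes the prompt and usually wraps the extracted triples in a Markdown code fence. A hedged parsing sketch (the ```json fence is an assumption about typical Triplex output, so adjust if your run looks different):

import json

# Hedged sketch: grab the text after the last ```json fence and parse it.
# If the output is not fenced, json.loads will fail and we fall back to printing it.
raw = prediction.split("```json")[-1].split("```")[0].strip()
try:
    triples = json.loads(raw)
    print(json.dumps(triples, indent=2))
except json.JSONDecodeError:
    print("Could not parse JSON from the model output:\n", raw)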
Categories: DBA Blogs

Error in job of type: executable (window ) (express edition)

Tom Kyte - Sat, 2024-07-20 05:46
I HAVE "Windows 11 Home Single Language". DEVICE NAME: Laptopnum02. NO PASSWORD FOR MY ACCOUNT. 1.CREATE A TEST *.CMD FILE . CREATE A *.TXT THEN CHANGED THE EXTENSION TO: *.CMD THIS MAKES THE *.CMD CALLED: creararchivo.cmd <code>@echo off echo Este es el contenido del archivo creado por demo.cmd. > archivo_creado.txt echo Segunda linea en el archivo creado. >> archivo_creado.txt echo Tercera linea en el archivo creado. >> archivo_creado.txt</code> I DISABLED MY ANTIVIRUS (AVAST) CREATE CREDENTIALS IN sqldeveloper <code>BEGIN DBMS_SCHEDULER.CREATE_CREDENTIAL ( credential_name => 'CREDENCIAL_PRUEBA', username => 'TONYROJAS', password => 'ABC', windows_domain => 'SYSTEM' ); END; CREATE JOB BEGIN DBMS_SCHEDULER.create_job ( job_name => 'PRUEBA', job_type => 'EXECUTABLE', job_action => 'C:\WINDOWS\SYSTEM32\CMD.EXE', number_of_arguments => 2, start_date => SYSTIMESTAMP, repeat_interval => 'FREQ=DAILY; BYHOUR=23', enabled => FALSE, auto_drop => FALSE, comments => 'Job para realizar backup diario' ); -- EJECUTAR COMANDOS DENTRO DEL SIMBOLO DEL SISTEMA DBMS_SCHEDULER.set_job_argument_value ('PRUEBA', 1, '/C'); DBMS_SCHEDULER.set_job_argument_value ('PRUEBA', 2, 'C:\BackupOracle\creararchivo.cmd'); ** DBMS_SCHEDULER.set_attribute('PRUEBA', 'credential_name', 'CREDENCIAL_PRUEBA');** DBMS_SCHEDULER.enable('PRUEBA'); ` END; EXECUTE JOB BEGIN DBMS_SCHEDULER.run_job('PRUEBA'); END; ERROR: Bug Report - ORA-27369: job of type EXECUTABLE failed with exit code: 7 Invalid username or password ORA-06512: in "SYS.DBMS_ISCHED", line 241 ORA-06512: in "SYS.DBMS_SCHEDULER", line 566 Also, when I delete the JOB credential: DELETE THIS PART: DBMS_SCHEDULER.set_attribute('PRUEBA', 'credential_name', 'CREDENCIAL_PRUEBA');</code> And run it again, I get the following error: <code>Bug Report - ORA-27369: job of type EXECUTABLE failed with exit code: 1 Access Denied. Access denied. Access denied.</code> I would like to understand both errors and how to solve it please.
Categories: DBA Blogs

Create Space Shooter Game with Local Free AI Model Mistral Nemo

Pakistan's First Oracle Blog - Sat, 2024-07-20 02:53

 This video shows how to create a free local space shooter game from just a text prompt using the Mistral NeMo LLM.


Full Prompt:

"""
Create a Python program that generates a customizable space shooter game. The game should have the following features:

A player ship that can be customized with a choice of 3 colors and 2 weapon types
3 types of enemies with unique behaviors and difficulty levels
2 power-ups that can be collected by the player
5 levels with increasing difficulty
A boss fight at the end of level 5

The program should use the Pygame library and generate the game assets (sprites, backgrounds, etc.) using AI algorithms. The game should be playable from the terminal.

Customization Options:

Player ship color (red, blue, or green)
Player ship weapon type (laser or missiles)
AI-Generated Assets:
Player ship sprite
Enemy sprites (3 types)
Power-up icons (2 types)
Backgrounds (5 levels)

Gameplay Logic:
Player movement and shooting
Enemy AI behaviors (patrolling, chasing, etc.)
Collision detection and response
Scoring and level progression
Test the AI model's ability to generate a functional and engaging space shooter game with customization options.

"""

Full Game Code:


```python
import pygame
import random

# Initialize Pygame
pygame.init()

# Set up some constants
WIDTH, HEIGHT = 800, 600
FPS = 60
WHITE = (255, 255, 255)
BLACK = (0, 0, 0)
RED = (255, 0, 0)
LIVES = 3

# Set up the display
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Space Shooter")
clock = pygame.time.Clock()

# Player class
class Player(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        self.image = pygame.Surface((50, 50))
        self.image.fill(WHITE)
        self.rect = self.image.get_rect()
        self.rect.centerx = WIDTH // 2
        self.rect.bottom = HEIGHT - 10
        self.speed_x = 0

    def update(self):
        self.speed_x = 0
        keys = pygame.key.get_pressed()
        if keys[pygame.K_LEFT]:
            print("Left key pressed")
            self.speed_x = -5
        if keys[pygame.K_RIGHT]:
            print("Right key pressed")
            self.speed_x = 5
        self.rect.x += self.speed_x
        if self.rect.left < 0:
            self.rect.left = 0
        if self.rect.right > WIDTH:
            self.rect.right = WIDTH

    def shoot(self):
        bullet = Bullet(self.rect.centerx, self.rect.top)
        all_sprites.add(bullet)
        bullets.add(bullet)

# Enemy class
class Enemy(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        self.image = pygame.Surface((30, 30))
        self.image.fill(RED)
        self.rect = self.image.get_rect()
        self.rect.x = random.randrange(WIDTH - self.rect.width)
        self.rect.y = random.randrange(-100, -40)
        self.speed_y = random.randrange(1, 8)

    def update(self):
        self.rect.y += self.speed_y
        if self.rect.top > HEIGHT + 10:
            self.rect.x = random.randrange(WIDTH - self.rect.width)
            self.rect.y = random.randrange(-100, -40)
            self.speed_y = random.randrange(1, 8)

# Bullet class
class Bullet(pygame.sprite.Sprite):
    def __init__(self, x, y):
        super().__init__()
        self.image = pygame.Surface((10, 20))
        self.image.fill(WHITE)
        self.rect = self.image.get_rect()
        self.rect.centerx = x
        self.rect.top = y
        self.speed_y = -10

    def update(self):
        self.rect.y += self.speed_y
        if self.rect.bottom < 0:
            self.kill()

# Create sprite groups
all_sprites = pygame.sprite.Group()
enemies = pygame.sprite.Group()
bullets = pygame.sprite.Group()

# Create player instance
player = Player()
all_sprites.add(player)

# Create enemy instances
for i in range(10):
    enemy = Enemy()
    all_sprites.add(enemy)
    enemies.add(enemy)

# Main game loop
running = True
lives = LIVES
while running:
    clock.tick(FPS)

    # Process input (events)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_SPACE:
                print("Spacebar pressed")
                player.shoot()

    # Update
    all_sprites.update()

    # Check for bullet-enemy collisions
    hits = pygame.sprite.groupcollide(enemies, bullets, True, True)
    for hit in hits:
        enemy = Enemy()
        all_sprites.add(enemy)
        enemies.add(enemy)

    # Check if player collides with an enemy
    if pygame.sprite.spritecollideany(player, enemies):
        lives -= 1
        print(f"Lives left: {lives}")
        if lives == 0:
            running = False
        else:
            player.rect.centerx = WIDTH // 2
            player.rect.bottom = HEIGHT - 10

    # Draw
    screen.fill(BLACK)
    all_sprites.draw(screen)

    # Flip the display
    pygame.display.flip()

pygame.quit()
```
Categories: DBA Blogs

Video on OCR and OLR commands in RAC GI/ClusterWare

Hemant K Chitale - Fri, 2024-07-19 21:14

 Last week I published a new video on OCR and OLR commands.

ocrcheck  :  Lists the locations of the OCR and checks for corruption (run as root to check for logical corruption as well)

ocrconfig -add DG Name (e.g. ocrconfig -add +DATA)   :  Adds a new copy of the OCR in the stated ASM DG

ocrconfig -delete DG Name  : Deletes a copy of the OCR from the ASM DG 


cat /etc/oracle/olr.loc :  Shows the location of the OLR

ocrcheck -local : Checks the OLR


ocrconfig -showbackup  :  Shows the default location of OCR backups

ocrconfig -manualbackup  : Create a manual backup of the OCR

(use asmcmd to copy the backup out from ASM to Filesystem)


ocrconfig -local -showbackuploc : Shows the location of OLR backups

ocrconfig -local -manualbackup :  Create a manual backup of the OLR

ocrconfig -local -export  : Create an Export backup of the OLR



Categories: DBA Blogs

file transfer

Tom Kyte - Thu, 2024-07-18 17:06
Hi Tom, I am getting the following error when I use the copy_file procedure of the dbms_file_transfer package. Here I am trying to copy a log file from one folder to another. Thanks

SQL> BEGIN
  2    dbms_file_transfer.copy_file(source_directory_object =>
  3      'SOURCE_DIR', source_file_name => 'sqlnet.log',
  4      destination_directory_object => 'DEST_DIR',
  5      destination_file_name => 'sqlnet.log');
  6  END;
  7  /
BEGIN
*
ERROR at line 1:
ORA-19505: failed to identify file "c:\temp\source\sqlnet.log"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 3223)
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 84
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 193
ORA-06512: at line 2
Categories: DBA Blogs

Use GPT-4o Mini Locally with Text and Images

Pakistan's First Oracle Blog - Thu, 2024-07-18 16:54

 This video introduces and shows how to use GPT-4o mini by OpenAI, which is quite cost-efficient and performant.


Code:

from openai import OpenAI
import base64
import requests
import os

## Set the API key and model name
MODEL="gpt-4o-mini"
client = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

IMAGE_PATH="nm.png"
base64_image = encode_image(IMAGE_PATH)

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant that responds in Markdown. Help me with this image!"},
        {"role": "user", "content": [
            {"type": "text", "text": "Describe the image? how many girls are there?"},
            {"type": "image_url", "image_url": {
                "url": f"data:image/png;base64,{base64_image}"}
            }
        ]}
    ],
    temperature=0.0,
)

print(response.choices[0].message.content)

-

#pip install -U openai
#export OPENAI_API_KEY=""

from openai import OpenAI
import os

## Set the API key and model name
MODEL="gpt-4o-mini"
client = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

completion = client.chat.completions.create(
  model=MODEL,
  messages=[
    {"role": "system", "content": "You are a helpful assistant. Help me with my question!"},
    {"role": "user", "content": "A bat and a ball together cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?"}  
  ]
)

print("Assistant: " + completion.choices[0].message.content)
Categories: DBA Blogs

Install Mistral Nemo Locally and Test for Multi-Lingual, Function Calling

Pakistan's First Oracle Blog - Thu, 2024-07-18 16:51

 This video installs Mistral NeMo locally and tests it on multi-lingual, math, coding, and function-calling tasks.


Code:

conda create -n nemo python=3.11 -y && conda activate nemo

pip install torch
pip install git+https://github.com/huggingface/transformers.git
pip install mistral_inference
pip install huggingface_hub pathlib

from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Nemo-Instruct-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook


from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

prompt = "Write 10 sentences ending with the word beauty."

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)


#===============================
# Function Calling
#===============================

from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
        ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
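
The decoded result for a tool call is typically a JSON list of calls (often preceded by a [TOOL_CALLS] marker). A hedged parsing sketch; the exact wrapping varies between tokenizer versions, so adapt it to the string your run actually produces:

import json
import re

# Hedged sketch: find the first JSON array in the decoded output and parse it
# into tool calls. The surrounding markers differ between tokenizer versions.
match = re.search(r"\[\s*\{.*\}\s*\]", result, re.DOTALL)
if match:
    tool_calls = json.loads(match.group(0))
    for call in tool_calls:
        print(call.get("name"), call.get("arguments"))
else:
    print("No tool call found in:", result)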
Categories: DBA Blogs

Install Codestral Mamba Locally - Best Math AI Model

Pakistan's First Oracle Blog - Tue, 2024-07-16 17:07

 This video installs Codestral Mamba locally, an open code model based on the Mamba2 architecture.



Code: 

conda create -n codestralmamba python=3.11 -y && conda activate codestralmamba

pip install torch huggingface_hub pathlib2

pip install "mistral_inference>=1" mamba-ssm causal-conv1d   # quotes keep >= from being treated as a shell redirect

from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'mamba-codestral-7B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/mamba-codestral-7B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)

mistral-chat $HOME/mistral_models/mamba-codestral-7B-v0.1 --instruct  --max_tokens 256
Categories: DBA Blogs

Easy Tutorial to Fine-Tune Vision Model on Image Data Locally

Pakistan's First Oracle Blog - Sun, 2024-07-14 19:56

 This video is an easy, step-by-step tutorial on fine-tuning a vision model on your own custom image dataset locally.



Code:

conda create -n ft python=3.11 -y && conda activate ft
!pip install -U transformers datasets trl peft accelerate Pillow torch

from datasets import features, load_dataset
from transformers import AutoModelForVision2Seq, AutoProcessor
import torch
from trl import DPOConfig, DPOTrainer
from peft import LoraConfig
import os

ds_id = "openbmb/RLAIF-V-Dataset"
dataset = load_dataset(ds_id, split="train")
dataset = dataset.shuffle(seed=42).select(range(100))

model_id = "HuggingFaceM4/idefics2-8b"

model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id, do_image_splitting=False)

def format_ds(example):
    # Prepare the input for the chat template
    prompt = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": example["question"]}]}]
    chosen = [{"role": "assistant", "content": [{"type": "text", "text": example["chosen"]}]}]
    rejected = [{"role": "assistant", "content": [{"type": "text", "text": example["rejected"]}]}]
    # Apply the chat template
    prompt = processor.apply_chat_template(prompt, tokenize=False)
    chosen = processor.apply_chat_template(chosen, tokenize=False)
    rejected = processor.apply_chat_template(rejected, tokenize=False)
    # Resize the image to ensure it fits within the maximum allowable
    # size of the processor to prevent OOM errors.
    max_size = processor.image_processor.size["longest_edge"] // 2
    example["image"].thumbnail((max_size, max_size))
    return {"images": [example["image"]], "prompt": prompt, "chosen": chosen, "rejected": rejected}

dataset = dataset.map(format_ds, remove_columns=dataset.column_names, num_proc=os.cpu_count())

f = dataset.features
f["images"] = features.Sequence(features.Image(decode=True))
dataset = dataset.cast(f)

training_args = DPOConfig(
    output_dir="my-idefics2",
    bf16=True,
    gradient_checkpointing=True,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,
    num_train_epochs=1,
    push_to_hub=False,
    dataset_num_proc=os.cpu_count(),
    dataloader_num_workers=os.cpu_count(),
    logging_steps=10,
    )

trainer = DPOTrainer(
    model,
    ref_model=None,  # not needed when using peft
    args=training_args,
    train_dataset=dataset,
    tokenizer=processor,
    peft_config=LoraConfig(target_modules="all-linear"),
)

trainer.train()
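
After training finishes, the LoRA adapter can be written out for later use; a short follow-up sketch (the output directory name is just an example):

# Persist the fine-tuned LoRA adapter plus the processor next to it.
# The directory name is only an example.
trainer.save_model("my-idefics2/final-adapter")
processor.save_pretrained("my-idefics2/final-adapter")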
Categories: DBA Blogs

Install aTrain Locally - Offline Transcription of Speech Recordings Free with AI

Pakistan's First Oracle Blog - Sun, 2024-07-14 03:23

 This video is an easy, step-by-step tutorial on installing aTrain, a tool for automatically transcribing speech recordings using state-of-the-art machine learning models, without uploading any data.




Code:

conda create -n atrain python=3.11 -y && conda activate atrain
sudo apt install ffmpeg
pip install aTrain@git+https://github.com/BANDAS-Center/aTrain.git --extra-index-url https://download.pytorch.org/whl/cu118
sudo apt install nvidia-cudnn
aTrain init
aTrain start
Categories: DBA Blogs
