GPT-Scratch

RAG Application: adding new data to an LLM with ChatGPT

Tip: ask about major events, like "Who attacked Israel in 2023?".

LLMs are trained on massive amounts of data, but because model development takes a long time, the information a model contains is already stale by the time it is released. The model works well and seems to "understand" (see my blog post about genAI, where I explain that models don't actually think), yet it can't answer questions about current events. This is where adapting the model with new data comes in. The most popular approaches are RAG (Retrieval-Augmented Generation), which retrieves relevant documents at query time and adds them to the prompt, and fine-tuning methods such as RLHF (Reinforcement Learning from Human Feedback) and PEFT (Parameter-Efficient Fine-Tuning). These techniques let an organization apply the power of GPT models to business problems on its own data.
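For illustration, here is a minimal sketch of the RAG pattern: retrieve the documents most relevant to a question, then pass them to the model as context. The tiny document store, the word-overlap retriever, and the gpt-4o-mini model name are illustrative assumptions, not the exact stack behind this demo.

```python
# Minimal RAG sketch. Assumptions: the `openai` Python package is installed,
# OPENAI_API_KEY is set in the environment, and "gpt-4o-mini" is available.
from openai import OpenAI

client = OpenAI()

# Tiny "knowledge base" of recent facts the base model may not contain.
documents = [
    "On October 7, 2023, Hamas launched a large-scale attack on Israel.",
    "Israel declared a state of war on October 8, 2023.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    """Augment the prompt with retrieved context, then ask the model."""
    context = "\n".join(retrieve(question, documents))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("Who attacked Israel in 2023?"))
```

A production version would swap the word-overlap retriever for embedding search over a vector store, but the shape of the pipeline (retrieve, augment, generate) stays the same.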

Companies I have worked with
