Whiteboard Technical Series: Large Language Models
Explore some of the key tools within our data science toolbox that power Juniper’s AI-Native Network. In this video, you'll see how Juniper uses large language models (LLMs) in the Marvis conversational interface to help customers retrieve documentation and other knowledge base information pertaining to the deployment and operation of their Juniper network solution.
You’ll learn
How generative AI and LLMs work
The role of retrieval augmented generation (RAG)
How Juniper uses LLMs and RAG for its AI-Native Network
Transcript
0:09 Generative AI is revolutionizing and humanizing AIOps interfaces and associated AI-native networking platforms. Today in the Tech Whiteboard series, we talk about generative AI and large language models, also known as LLMs. We'll also cover a technique called retrieval-augmented generation, or RAG, that enhances LLMs' quality and output.
0:32 LLMs are language models adept at understanding and generating human language. By employing deep learning through neural networks, they analyze vast amounts of text data, accessing billions of parameters to discern patterns, understand structure, and predict words or generate sentences.
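The word-prediction idea mentioned above can be illustrated with a deliberately tiny sketch. Note this is purely a toy: a real LLM uses a deep neural network with billions of parameters, not word-pair counts, but the core task of predicting the next word from context is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model predicts the next word from
# counts of word pairs seen in training text. Real LLMs instead use
# deep neural networks trained on vast amounts of text, but the core
# task -- predicting the next word given context -- is the same.
corpus = "the network is up the network is fast the link is up".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "network" follows "the" most often in this corpus
```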
0:50 Chatbots incorporated AI techniques like natural language processing (NLP) and transfer learning, which accelerated their evolution to better understand users' natural language input. Previous videos in this series have covered NLP. Now, recent advances in LLMs enable the retention of greater context and complexity in human-to-agent conversations, expanding their utility and integrations.

1:12 LLMs have some limitations, such as being confined to their training data up to their last update, or generating erroneous predictions called hallucinations. Techniques like prompt engineering and RAG are used to mitigate these issues and enhance response accuracy and freshness.

1:29 The input given to an LLM is called a prompt. Prompt engineering is the art of formulating precise questions to get better, safer, or more accurate responses. Prompts can be refined, enhanced, or sanitized before being submitted. New information can be integrated from external sources and databases using RAG. This approach ensures responses remain relevant, accurate, fresh, and trustworthy.

1:57 Enhancing the LLM output with RAG is a multi-stage process of indexing, retrieval, and generation. First, all relevant documents, such as Juniper Networks' public documentation, must be chunked and indexed into a vector database. Next, we retrieve relevant documents based on the user's input query. Finally, the relevant documents are combined with the prompt as additional context. The combined text and prompt are then passed to the LLM to generate an accurate and contextually relevant response.

2:26 The Mist AI Engine features Marvis, a conversational network assistant enhanced with an LLM and Juniper Networks' public documentation via RAG, raising its understanding and capabilities. Hence, Marvis can now answer custom inquiries regarding Juniper products and services, going beyond its troubleshooting capabilities for your Juniper Mist-managed deployments.

2:48 Teams collaborate better with AI tools that have an enhanced understanding of their intent and desired outcomes. Juniper Networks is pioneering AI-native networking with tools like NLP, LLMs, and RAG, democratizing access and demystifying complex operations. Users and operators at all levels can work at their own pace while rapidly learning, understanding, and driving change.
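The three RAG stages described in the transcript (index, retrieve, generate) can be sketched in a few lines. Everything below is illustrative: the documents are made up, the "embedding" is a simple bag-of-words count standing in for a learned embedding model, and a production pipeline would use a real vector database and an actual LLM call for the final generation step.

```python
import math
from collections import Counter

# --- Stage 1: index -- chunk documents and store their vectors ---
# Hypothetical document chunks; a real system would chunk and index
# actual product documentation.
documents = [
    "Marvis is a conversational network assistant.",
    "RAG combines retrieved documents with the user's prompt.",
    "A vector database stores embeddings for similarity search.",
]

def embed(text):
    """Toy embedding: a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

index = [(doc, embed(doc)) for doc in documents]

# --- Stage 2: retrieve -- rank chunks by cosine similarity to the query ---
def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# --- Stage 3: generate -- combine retrieved context with the prompt ---
def build_prompt(query):
    """Prepend retrieved chunks as context; the result would be sent to an LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What does a vector database store?"))
```

The design point is that the LLM itself is unchanged: freshness and accuracy come from what is retrieved and placed in the prompt, which is why updating the indexed documents updates the answers without retraining the model.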