The Semantic Kernel framework is a powerful tool that allows users to leverage large language models to transform, analyze, or generate content based on custom-defined prompts. It's particularly useful for domain-specific transformations, complex content analyses, or any other application where a generic predefined language model doesn't suffice.
Plugins in the Semantic Kernel framework are essentially predefined templates or prompts that instruct the underlying language model to perform specific tasks. These can range from content translation to perception analysis, summarization, and more.
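To make the idea concrete: a Semantic Kernel prompt template is ordinary text with placeholders such as {{$input}} that are filled in before the prompt is sent to the model. The sketch below illustrates the substitution step with plain string replacement; it is an illustration of the concept only, not Semantic Kernel's actual template engine:

```python
# Illustration only: Semantic Kernel's real template engine is richer,
# but the core idea is substituting variables into a prompt template.
translation_template = """
{{$input}}
Translate the provided post into:
1. Arabic
2. French
"""

def render(template: str, variables: dict) -> str:
    """Fill {{$name}} placeholders with the supplied values."""
    for name, value in variables.items():
        template = template.replace("{{$" + name + "}}", value)
    return template

prompt = render(translation_template, {"input": "Proud to be part of the team!"})
print(prompt)
```

The rendered prompt now contains the post text where {{$input}} used to be, followed by the task instructions.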
1. Install the framework:
   python -m pip install semantic-kernel
2. Initialize the framework: This usually involves creating a kernel instance and registering a completion service with any default parameters.
3. Define your plugins: Use the create_semantic_function method to define your custom plugins based on the required prompts. For more information, see https://github.com/microsoft/semantic-kernel
4. Load your plugins: Import the predefined plugins or prompts that you've created.
5. Provide input: This can come from standard input, a text file, or any other source.
6. Run the plugin: Pass the input to the relevant plugin using the framework's methods.
7. Retrieve and process the output: Once the language model has processed the input according to the plugin's instructions, retrieve the output for further use or display.
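Steps 5 through 7 are typically repeated for each plugin. When the plugin calls are independent of one another, they can be dispatched concurrently rather than awaited one at a time. A minimal asyncio sketch of that pattern, using stand-in coroutines in place of real kernel calls (run_plugin below is a hypothetical placeholder, not a Semantic Kernel API):

```python
import asyncio

async def run_plugin(name: str, text: str) -> str:
    # Stand-in for an LLM-backed plugin call; a real call would
    # await the model's response instead of sleeping.
    await asyncio.sleep(0.01)
    return f"[{name}] processed {len(text)} characters"

async def main() -> list[str]:
    post = "Proud to be a part of the journey!"
    # Independent plugin runs can be awaited together.
    return await asyncio.gather(
        run_plugin("perception", post),
        run_plugin("translation", post),
        run_plugin("summary", post),
    )

results = asyncio.run(main())
print(results)
```

asyncio.gather preserves the order of its arguments, so the results line up with the plugins that produced them.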
In this example, we define three plugins:
- Perception Analysis
- Translation
- Summarization
After defining these plugins with the create_semantic_function method and the relevant prompts, we can feed a LinkedIn post to them to get a perception analysis, translations into different languages, and a summary in terms of pros and cons.
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, OpenAIChatCompletion
from IPython.display import display, Markdown
kernel = sk.Kernel()
useAzureOpenAI = False
if useAzureOpenAI:
    deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()
    kernel.add_text_completion_service("azureopenai", AzureChatCompletion(deployment, endpoint, api_key))
else:
    api_key, org_id = sk.openai_settings_from_dot_env()
    kernel.add_text_completion_service("openai", OpenAIChatCompletion("gpt-3.5-turbo-0301", api_key, org_id))
print("A kernel is now ready.")
# Creating the prompts for the Semantic Kernel framework based on the analyses
perception_analysis_prompt = """
{{$input}}
Analyze the perception of the provided post by:
1. Your Manager
2. Your Colleague
3. Your HR
"""
translation_prompt = """
{{$input}}
Translate the provided post into:
1. Arabic
2. French
"""
summarization_prompt = """
{{$input}}
Summarize the provided post in terms of pros and cons in bullet points.
"""
perception_analysis_function = kernel.create_semantic_function(
    prompt_template=perception_analysis_prompt,
    description="Perception Analysis of a post.",
    max_tokens=1000,
    temperature=0.1,
    top_p=0.5,
)
translation_function = kernel.create_semantic_function(
    prompt_template=translation_prompt,
    description="Translation of a post.",
    max_tokens=1000,
    temperature=0.1,
    top_p=0.5,
)
summarization_function = kernel.create_semantic_function(
    prompt_template=summarization_prompt,
    description="Summarization of a post.",
    max_tokens=1000,
    temperature=0.1,
    top_p=0.5,
)
sk_input = """
Proud to be a part of SLB's journey in championing a sustainable future. 🌍 Honored by TIME as one of 2023's World's Best Companies, spotlighting our commitment to sustainability. Hats off to the SLB team for pushing energy innovation!
#WeAreSLB
"""
# run_async returns a context object, so convert each result to str before concatenating
summary_result = str(await kernel.run_async(perception_analysis_function, input_str=sk_input))
summary_result += str(await kernel.run_async(translation_function, input_str=sk_input))
summary_result += str(await kernel.run_async(summarization_function, input_str=sk_input))
# TODO (Momr): use plugins directory in the future as follows
# pluginsDirectory = "./plugins-sk"
# pluginDT = kernel.import_semantic_skill_from_directory(pluginsDirectory, "LinkedInAnalysis");
# summary_result = await kernel.run_async(pluginDT["Perception"], input_str=sk_input)
# uncomment the following line if you run this code from a Jupyter Notebook
# display(Markdown("### ✨ " + str(summary_result)))
print(summary_result)
Your Manager:
- Positive Perception: Happy to see team members taking pride in company achievements and advocating for the brand on public platforms.
- Possible Concern: Would hope that the post aligns with the company's communication guidelines.

Your Colleague:
- Positive Perception: Excited and proud to see the shared achievement being celebrated. It fosters team spirit and might motivate others to share similar sentiments.
- Possible Concern: None. It's a positive post and showcases a significant achievement.

Your HR:
- Positive Perception: Pleased to see employees advocating for the company brand and values, especially on platforms like LinkedIn. Such posts can boost employer branding.
- Possible Concern: Would verify if the post follows the company's social media and communication policy.
Arabic:
فخور بأن أكون جزءًا من رحلة SLB في الدعوة إلى مستقبل مستدام. 🌍 تم تكريمها من قبل TIME كواحدة من أفضل الشركات في العالم لعام 2023 ، مسلطة الضوء على التزامنا بالاستدامة. تحية إلى فريق SLB على دفع الابتكار في مجال الطاقة!
#WeAreSLB

French:
Fier de faire partie du parcours de SLB pour défendre un avenir durable. 🌍 Honoré par TIME comme l'une des meilleures entreprises mondiales de 2023, mettant en lumière notre engagement envers la durabilité. Chapeau à l'équipe SLB pour avoir poussé l'innovation énergétique!
#WeAreSLB
Pros:
- Celebrates company's recognition by a reputed platform like TIME.
- Highlights company's commitment to sustainability.
- Acknowledges team's effort in pushing energy innovation.

Cons:
- Does not provide specifics on the projects or innovations.
- Might be seen as self-promotional by some audiences.
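The commented-out TODO in the code points at the plugins-directory approach, where each function lives in its own folder containing an skprompt.txt (the prompt template) and a config.json (description and completion settings). The sketch below creates that layout for the Perception function; the folder structure follows the convention that import_semantic_skill_from_directory expected in that era of the library, and the exact config.json schema should be verified against the repository:

```python
import json
from pathlib import Path
from tempfile import mkdtemp

# Sketch of the plugins-sk/LinkedInAnalysis/Perception layout the TODO refers to.
root = Path(mkdtemp()) / "plugins-sk" / "LinkedInAnalysis" / "Perception"
root.mkdir(parents=True)

# skprompt.txt holds the prompt template itself.
(root / "skprompt.txt").write_text(
    "{{$input}}\n"
    "Analyze the perception of the provided post by:\n"
    "1. Your Manager\n2. Your Colleague\n3. Your HR\n"
)

# config.json holds the description and completion parameters
# that were passed inline to create_semantic_function above.
config = {
    "schema": 1,
    "type": "completion",
    "description": "Perception Analysis of a post.",
    "completion": {"max_tokens": 1000, "temperature": 0.1, "top_p": 0.5},
}
(root / "config.json").write_text(json.dumps(config, indent=2))

print(sorted(p.name for p in root.iterdir()))
```

Keeping prompts in files like this makes them easier to version and review than inline strings.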
The Semantic Kernel framework offers a flexible way to harness the power of large language models for custom applications. With the ability to define plugins, users can tailor the model's responses to specific needs and domains.