Marcin Kozlowski
AI, LLM, Copilot


This project offers pre-trained language models for common natural language processing tasks.

The models in this repository are fine-tuned on task-specific datasets, which typically improves performance on downstream tasks compared with using the base checkpoints directly.


The codebase includes implementations of different language models, such as GPT-2 and RoBERTa, which are widely used for tasks like text generation and sentiment analysis.
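As a rough sketch of how such a model is typically used for text generation, the following loads the base `gpt2` checkpoint through the Hugging Face `transformers` library. This is an assumption for illustration: the repository's own fine-tuned checkpoint name is not given here, so the public `gpt2` model stands in for it.

```python
from transformers import pipeline

# Hypothetical example: "gpt2" is the public base checkpoint; a fine-tuned
# checkpoint from this repository would be substituted here if one is named
# in its documentation.
generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt.
out = generator("Machine learning is", max_new_tokens=20, num_return_sequences=1)
print(out[0]["generated_text"])
```

By default the pipeline returns the prompt followed by the generated continuation, so the output string begins with the original prompt text.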

Users can load the pre-trained models in their own applications by following the documentation provided in the repository.
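For the sentiment-analysis side, a minimal sketch along the same lines might look like the snippet below. The checkpoint `cardiffnlp/twitter-roberta-base-sentiment-latest` is a publicly available fine-tuned RoBERTa model used here purely as a stand-in; the repository's own fine-tuned model, if one is published, would take its place.

```python
from transformers import pipeline

# Hypothetical example: a public fine-tuned RoBERTa sentiment checkpoint
# stands in for whatever model this repository ships.
classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)

# Classify a single sentence; the pipeline returns a label and a
# confidence score between 0 and 1.
result = classifier("This library is easy to use.")
print(result)
```

The same `pipeline` call accepts a list of strings, so batches of texts can be classified in one invocation.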