AI Bootcamp: LLM Finetuning & Deployment

On Friday, July 4th, 2025, Float16, in collaboration with the Typhoon team at SCB 10X, organized the AI Bootcamp: LLM Finetuning & Deployment at DistrictX, FYI Building. The event marked a significant milestone in promoting AI technology development in Thailand, drew overwhelming interest, and exceeded expectations.
Event Overview
The AI Bootcamp was a full-day, hands-on training program designed to take participants from a basic understanding of Large Language Models (LLMs) to practical fine-tuning and deployment, working with real tools and GPUs.
Highlights
🔧 Morning Session: How to Finetune Typhoon Open-Source LLMs
Speaker: Surapon Nonesung, Research Scientist at SCB 10X
👉 Key Takeaways from the Typhoon Team: 5 Tips for Fine-Tuning to Get High-Performance Models
- Spend over 80% of your time on data preparation (data quality is the foundation of a good model)
- Create at least two evaluation datasets, at least one of which contains data the model has never been trained on
- Use both a train set and an eval set during fine-tuning to check for overfitting
- Evaluate the model before and after fine-tuning to confirm it actually improves
- Check and adjust the chat template (system prompt, instruction format, etc.); a good template helps the model answer more accurately and perform noticeably better
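Tips 2 and 3 above hinge on keeping held-out data truly held out. A minimal sketch (toy data and illustrative field names, not the Typhoon team's actual pipeline) of building a train set plus one in-distribution eval set, with a second eval set drawn from unseen data:

```python
import random

def split_dataset(examples, eval_frac=0.1, seed=42):
    """Shuffle and split examples into a train set and an in-distribution eval set."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_eval = max(1, int(len(shuffled) * eval_frac))
    return shuffled[n_eval:], shuffled[:n_eval]

# 100 toy instruction/response pairs standing in for real fine-tuning data
data = [{"prompt": f"q{i}", "response": f"a{i}"} for i in range(100)]
train_set, eval_in = split_dataset(data)

# The second eval set should come from a source never seen in training,
# e.g. freshly written questions, so a train/eval gap exposes overfitting.
eval_unseen = [{"prompt": "new question", "response": "reference answer"}]

print(len(train_set), len(eval_in))
```

Tracking loss on both eval sets during fine-tuning makes overfitting visible: training loss keeps falling while held-out loss stalls or rises.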
🚀 Afternoon Session: How to Deploy Your Finetuned Typhoon Model on Float16 ServerlessGPU
Speaker: Matichon Maneegard, Founder at Float16
👉 Key Takeaways from the Float16 Team: 3 Techniques for Improving LLMs for Real Software Development Use
- Choose the right LLM file format for your purpose:
  - .safetensors → for HuggingFace; stores model weights, tokenizer, and architecture in separate files
  - .gguf → for llama-cpp, Ollama, and LM Studio; easy to use
  - Rule of thumb: .safetensors for fine-tuning, .gguf for inference
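To make the trade-off concrete, the .safetensors layout itself is simple: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype, shape, and byte offsets, then the raw tensor bytes. A stdlib-only sketch (toy tensor, illustrative names) that writes and re-reads such a header:

```python
import json
import struct

def write_minimal_safetensors(path, name, raw, dtype="F32", shape=(2, 2)):
    """Write one tensor in the .safetensors layout:
    8-byte little-endian header size, JSON header, raw tensor bytes."""
    header = {name: {"dtype": dtype, "shape": list(shape),
                     "data_offsets": [0, len(raw)]}}
    hjson = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hjson)))
        f.write(hjson)
        f.write(raw)

def read_safetensors_header(path):
    """Read back just the JSON header (tensor names, dtypes, shapes)."""
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(hlen).decode("utf-8"))

raw = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)  # a 2x2 float32 tensor
write_minimal_safetensors("demo.safetensors", "layer0.weight", raw)
print(read_safetensors_header("demo.safetensors"))
```

In practice you would use the `safetensors` library rather than hand-rolling this; the point is that the format keeps named tensors inspectable without loading them, which is what fine-tuning toolchains rely on.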
- Implement OpenAI API compatibility:
  - Existing code keeps working without a rewrite
  - Switch the endpoint from OpenAI to your own model
  - Save costs while keeping full control
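The reason switching is cheap is that the chat-completions wire format is plain JSON, so only the base URL changes. A stdlib-only sketch that builds (but does not send) such a request; the endpoint URL and model name are placeholders, and real code would typically just pass a custom `base_url` to the `openai` SDK instead:

```python
import json
import urllib.request

BASE_URL = "https://your-float16-endpoint/v1"  # placeholder, not a real URL

def chat_completion_request(model, messages, base_url=BASE_URL, api_key="YOUR_KEY"):
    """Build a chat-completions request in the OpenAI wire format.
    Swapping providers only changes base_url, never the payload shape."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

req = chat_completion_request(
    "typhoon-finetuned",  # illustrative model name
    [{"role": "user", "content": "Hello"}],
)
print(req.full_url)
```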
- Structured output (grammar) improves response quality:
  - Use xgrammar, outlines, or guidance to constrain the response format
  - JSON mode enables accurate function calling
  - Define grammar rules for SQL, option selection, or other specific formats
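A toy, stdlib-only sketch of what constrained decoding does under the hood: at each step, only continuations that keep the output inside the grammar are allowed. Real libraries such as xgrammar and outlines apply this filtering at the token/logit level against full grammars or JSON schemas; this example uses characters and a tiny "option selection" grammar:

```python
OPTIONS = ["yes", "no", "maybe"]  # the only outputs the "grammar" permits

def allowed_next_chars(prefix, options=OPTIONS):
    """Characters that keep `prefix` extendable to some valid option."""
    return sorted({opt[len(prefix)] for opt in options
                   if opt.startswith(prefix) and len(opt) > len(prefix)})

print(allowed_next_chars(""))    # first characters of all valid options
print(allowed_next_chars("ma"))  # only "maybe" can continue from here
```

Masking every invalid choice means the model cannot produce an out-of-format answer at all, which is why grammar constraints make JSON mode and function calling reliable.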
Excellent Response
From post-event evaluations, we received better-than-expected feedback:
💬 Some Impressive Comments
"I was impressed with the overall activity. It was great to learn from people who actually work in the field." - Participant
"The SCB10X and Float16 teams were very attentive in providing knowledge. The management was excellent. Matichon from Float16 organized the Lecture & Workshop format very well." - Participant
"I really enjoyed the event. Especially how smooth and well-streamlined the event was." - Participant
What Participants Received
🎯 Practical Knowledge
- Hands-on experience in fine-tuning and deploying LLMs
- Usage of Typhoon open-source models
- Techniques for improving LLMs for Software Development work
💰 Benefits
- 100% free of charge
- GPU Credits from Float16 worth 1,000 THB
- Digital certificate
- Free lunch
🤝 Network
Participants met and exchanged experiences with:
- Professors, students, researchers, and Data Scientists
- Engineers and Developers
- Startup founders and entrepreneurs
Our thanks go to everyone who made the event possible, including:
- The SCB 10X and NVIDIA teams for their full support
- DistrictX for providing the venue
- All participants who provided feedback and great ideas
Float16 is committed to being part of pushing Thailand to become a regional AI leader. This event is just the beginning of building a strong and sustainable AI practitioners community.
#AIBootcamp #LLMFinetuning #Float16 #SCB10X #NVIDIA #MachineLearning #AIThailand #Typhoon