{"id":53688,"date":"2025-08-22T15:13:13","date_gmt":"2025-08-22T05:13:13","guid":{"rendered":"https:\/\/www.cloudproinc.com.au\/?p=53688"},"modified":"2025-08-22T15:25:16","modified_gmt":"2025-08-22T05:25:16","slug":"what-is-supervised-fine-tuning-sft","status":"publish","type":"post","link":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/22\/what-is-supervised-fine-tuning-sft\/","title":{"rendered":"What is Supervised Fine-Tuning (SFT)"},"content":{"rendered":"\n<p>In this blog post, <em>&#8220;What is Supervised Fine-Tuning (SFT)&#8221;<\/em>, we unpack what supervised fine-tuning is, when it\u2019s the right tool, how it works under the hood, and how to run a robust SFT project end-to-end, from data to deployment.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-is-supervised-fine-tuning-sft\">What is Supervised Fine-Tuning (SFT)?<\/h2>\n\n\n\n<p>Supervised fine-tuning adapts a pretrained <a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/category\/llm\/\">language model<\/a> to perform better on a target behavior by training on paired inputs and outputs. The model learns via next-token prediction with teacher forcing, typically optimizing cross-entropy loss on the target response tokens. In practice, SFT is used to align models with desired formats (e.g., helpful answers, safe completions, tool-use schemas) and domains (e.g., support, legal, medical, coding) without retraining from scratch.<\/p>\n\n\n\n<p>Key characteristics:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data: input-output pairs (e.g., instruction \u2192 answer). 
Often called instruction tuning or task-specific SFT.<\/li>\n\n\n\n<li>Loss: next-token cross-entropy; commonly mask loss on prompt tokens and compute loss only on the response.<\/li>\n\n\n\n<li>Goal: improve adherence to instructions, factuality in a domain, stylistic consistency, and output structure.<\/li>\n\n\n\n<li>Scope: from small task adapters (parameter-efficient finetuning) to full-model updates.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-when-and-when-not-to-use-sft\">When (and When Not) to Use SFT<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-good-use-cases\">Good use cases<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Consistent formatting: APIs requiring JSON, function-call arguments, or specific templates.<\/li>\n\n\n\n<li>Domain adaptation: customer support, documentation, financial or legal drafting, coding conventions.<\/li>\n\n\n\n<li>Instruction following: more reliable step-by-step answers vs. a base model.<\/li>\n\n\n\n<li>Latent knowledge activation: making better use of pretrained knowledge with domain exemplars.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-consider-other-approaches-if\">Consider other approaches if<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You need preference optimization across multiple acceptable outputs: consider RLHF\/DPO after SFT.<\/li>\n\n\n\n<li>You need tool integration without training: try prompting or structured output constraints first.<\/li>\n\n\n\n<li>You only need light behavior change: try prompt engineering or system prompts before SFT.<\/li>\n\n\n\n<li>Your data is scarce or noisy: risk of overfitting or regressions; invest in data quality first.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-sft-works-under-the-hood\">How SFT Works Under the Hood<\/h2>\n\n\n\n<p>The model is fed a concatenation of the prompt and the target response. During training, the labels are the next tokens of the sequence. 
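As a minimal sketch of that label setup (token IDs below are invented for illustration, not from a real tokenizer), labels mirror the input sequence with every prompt position masked to -100, the ignore index most frameworks use for cross-entropy:

```python
# Minimal sketch of SFT label construction. Token IDs are made up for
# illustration; a real pipeline would get them from a tokenizer. Prompt
# positions are set to -100 (the conventional cross-entropy ignore index)
# so the loss is computed only on response tokens.
IGNORE_INDEX = -100

def build_labels(prompt_ids, response_ids):
    # The model sees prompt + response; labels mirror the inputs,
    # with the prompt span masked out of the loss.
    input_ids = list(prompt_ids) + list(response_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

prompt = [101, 7592, 2129]    # e.g. "<s> Summarize:"
response = [3437, 2003, 102]  # e.g. "summary </s>"
input_ids, labels = build_labels(prompt, response)
print(labels)  # [-100, -100, -100, 3437, 2003, 102]
```

Most trainer stacks apply the one-position shift between inputs and labels internally, so the labels stay aligned with the inputs as shown.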
To prevent the model from \u201clearning\u201d to reproduce the prompt, a loss mask is applied so only response tokens contribute to the loss. This preserves instruction-following while reinforcing the desired answers, style, and structure.<\/p>\n\n\n\n<p>Modern chat models also use conversation templates (system, user, assistant roles). It\u2019s critical to format SFT data with the exact chat template expected by the tokenizer and model, including special tokens. Misalignment here often causes degraded performance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-most-important-variable\">Data: The Most Important Variable<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-data-types\">Data types<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Human-authored instruction\u2013response pairs: highest quality, costly to scale.<\/li>\n\n\n\n<li>Human-edited synthetic data: model-generated drafts reviewed\/edited by experts; good cost-quality balance.<\/li>\n\n\n\n<li>Pure synthetic data: useful for coverage; requires heavy filtering and held-out evaluation to avoid bias.<\/li>\n\n\n\n<li>Logs and transcripts: mine real-world prompts and outcomes, with careful anonymization and curation.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-quality-checklist-data\">Quality checklist (data)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Coverage: reflect the distribution of prompts you expect in production.<\/li>\n\n\n\n<li>Diversity: vary phrasing, difficulty, length, and edge cases.<\/li>\n\n\n\n<li>Correctness: verify factuality and adherence to policies.<\/li>\n\n\n\n<li>Consistency: stable formats, with explicit acceptance criteria.<\/li>\n\n\n\n<li>Safety: remove harmful content or annotate with policy-compliant alternatives.<\/li>\n\n\n\n<li>Deduplication: avoid near-duplicate prompts\/answers; reduces overfitting and memorization.<\/li>\n\n\n\n<li>Licensing and privacy: ensure rights to use; redact sensitive data.<\/li>\n<\/ul>\n\n\n\n<h3 
class=\"wp-block-heading\" id=\"h-common-formats\">Common formats<\/h3>\n\n\n\n<p>Many teams use JSONL with fields like instruction, input, and output, or chat-style role messages.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>{\"instruction\": \"Summarize the text.\", \"input\": \"&lt;article&gt;...&lt;\/article&gt;\", \"output\": \"&lt;summary&gt;...&lt;\/summary&gt;\"}\n{\"messages\": &#91;\n  {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n  {\"role\": \"user\", \"content\": \"Explain transformers in 3 bullets.\"},\n  {\"role\": \"assistant\", \"content\": \"- ...\\n- ...\\n- ...\"}\n]}\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-model-choice-and-parameter-efficient-fine-tuning\">Model Choice and Parameter-Efficient Fine-Tuning<\/h2>\n\n\n\n<p>Pick a base model that already performs reasonably on your domain and supports your context length and deployment constraints. If latency or memory is tight, smaller models or quantization-aware methods help.<\/p>\n\n\n\n<p>Parameter-efficient fine-tuning (PEFT) like LoRA\/QLoRA updates a small number of adapter parameters while freezing the base model. Benefits: lower memory, faster training, easier rollback, and composable adapters for multiple behaviors. Full fine-tuning may yield slightly higher ceilings but at higher cost and risk.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-training-recipe-illustrative\">Training Recipe (Illustrative)<\/h2>\n\n\n\n<p>The following example sketches a typical SFT setup using common open-source tooling. 
Adjust to your stack as needed.<\/p>\n\n\n\n<pre class=\"wp-block-code has-white-color has-black-background-color has-text-color has-background has-link-color wp-elements-10d7225aad124f1b70f6b93ff57a154a\"><code>from datasets import load_dataset\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nfrom peft import LoraConfig, get_peft_model\nfrom trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM\n\nmodel_name = \"your-base-model\"\ndataset = load_dataset(\"json\", data_files=\"data.jsonl\")\n\ntokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=\"auto\")\n\n# If chat model, ensure correct chat template\n# tokenizer.apply_chat_template(...) during preprocessing\n\n# Only compute loss on assistant responses\nresponse_template = \"\\nassistant:\"  # match your prompt template exactly\ncollator = DataCollatorForCompletionOnlyLM(\n    response_template=tokenizer.encode(response_template, add_special_tokens=False),\n    tokenizer=tokenizer,\n)\n\nlora_config = LoraConfig(\n    r=16, lora_alpha=32, lora_dropout=0.05, bias=\"none\", task_type=\"CAUSAL_LM\"\n)\nmodel = get_peft_model(model, lora_config)\n\ntrainer = SFTTrainer(\n    model=model,\n    tokenizer=tokenizer,\n    train_dataset=dataset&#91;\"train\"],\n    eval_dataset=dataset.get(\"validation\"),\n    data_collator=collator,\n    args=SFTConfig(\n        output_dir=\".\/sft-model\",\n        max_seq_length=2048,\n        packing=False,  # completion-only masking is incompatible with packing\n        per_device_train_batch_size=2,\n        gradient_accumulation_steps=8,\n        learning_rate=2e-4,  # LoRA often uses higher LR than full ft\n        lr_scheduler_type=\"cosine\",\n        warmup_ratio=0.05,\n        num_train_epochs=3,\n        logging_steps=50,\n        save_steps=1000,\n        bf16=True,\n        gradient_checkpointing=True,\n    ),\n)\n\ntrainer.train()\nmodel.save_pretrained(\".\/sft-model\")\n<\/code><\/pre>\n\n\n\n<p>Notes:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Match your prompt\/response templates exactly when constructing inputs and masks.<\/li>\n\n\n\n<li>Turn on gradient checkpointing to fit larger context windows.<\/li>\n\n\n\n<li>Use bfloat16 if supported; it tends to be stable and fast on modern GPUs.<\/li>\n\n\n\n<li>Packing shorter examples reduces padding waste, but it conflicts with the completion-only collator; enable it only when training without response-only loss masking.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-hyperparameter-hints\">Hyperparameter Hints<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Sequence length: set to your operational context size; train near the max you plan to serve.<\/li>\n\n\n\n<li>Batch size: increase effective batch size with gradient accumulation when VRAM-limited.<\/li>\n\n\n\n<li>Learning rate: 1e-5 to 5e-5 for full fine-tuning; 1e-4 to 3e-4 for LoRA are common starting points.<\/li>\n\n\n\n<li>Warmup: 3\u201310% of steps; cosine or linear schedulers both work.<\/li>\n\n\n\n<li>Early stopping: monitor validation loss and task metrics; avoid overfitting to stylistic quirks.<\/li>\n\n\n\n<li>Regularization: mix in general instruction data (e.g., 10\u201330%) to preserve breadth.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-evaluation-know-what-good-means\">Evaluation: Know What \u201cGood\u201d Means<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-automatic-metrics\">Automatic metrics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Exact match and F1 for QA with canonical answers.<\/li>\n\n\n\n<li>BLEU\/ROUGE for summarization, but beware they can miss factuality.<\/li>\n\n\n\n<li>Multiple-choice accuracy (MC1\/MC2) for knowledge checks.<\/li>\n\n\n\n<li>Pass@k or unit-test pass rate for code generation.<\/li>\n\n\n\n<li>Schema adherence: JSON parse rate, field presence, JSON schema validation.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-preference-and-human-evaluation\">Preference and human 
evaluation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Pairwise win-rate vs. baseline on representative prompts.<\/li>\n\n\n\n<li>Rubric-based scoring: helpfulness, harmlessness, faithfulness, formatting.<\/li>\n\n\n\n<li>Red-teaming: prompt families targeting safety and robustness.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-test-design\">Test design<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Holdout split: no leakage from train to eval. Deduplicate at the prompt and n-gram levels.<\/li>\n\n\n\n<li>Stratification: include lengths, difficulty, and edge cases proportional to production.<\/li>\n\n\n\n<li>Statistical confidence: use multiple seeds; report confidence intervals where feasible.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-safety-policy-and-compliance\">Safety, Policy, and Compliance<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Policy conditioning: include system messages describing rules; reinforce with examples.<\/li>\n\n\n\n<li>Refusals and deflections: include exemplars of safe alternatives for disallowed content.<\/li>\n\n\n\n<li>PII handling: redact training data; test for unintended memorization with targeted prompts.<\/li>\n\n\n\n<li>Licensing: confirm rights to use data; track provenance and opt-out lists.<\/li>\n\n\n\n<li>Guardrails: combine SFT with runtime filters or classifiers for high-risk domains.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-from-lab-to-production\">From Lab to Production<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-packaging-and-deployment\">Packaging and deployment<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Adapters: with PEFT, ship only adapter weights; keep base model immutable for reuse.<\/li>\n\n\n\n<li>Quantization: 4\/8-bit inference for cost; validate accuracy and latency impacts.<\/li>\n\n\n\n<li>Prompt contracts: version your system prompts and templates alongside the model.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" 
id=\"h-monitoring\">Monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Quality KPIs: win-rate vs. baseline, schema-parse rate, task success, latency, cost.<\/li>\n\n\n\n<li>Safety KPIs: flagged content rate, false-positive\/negative rates in moderation.<\/li>\n\n\n\n<li>Drift: track prompt distribution changes and performance by segment over time.<\/li>\n\n\n\n<li>Feedback loops: collect user ratings and flagged cases to fuel continuous improvement.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-iteration\">Iteration<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data engine: prioritize new training examples from observed failure modes.<\/li>\n\n\n\n<li>Curriculum: stage training with general instructions, then domain, then format-heavy data.<\/li>\n\n\n\n<li>Preference optimization: consider DPO or RLHF after SFT for finer control of trade-offs.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-common-pitfalls-and-how-to-avoid-them\">Common Pitfalls and How to Avoid Them<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Mismatched templates: ensure the exact same chat\/prompt template is used at train and inference.<\/li>\n\n\n\n<li>Loss on prompts: mask non-response tokens; otherwise the model learns to echo inputs.<\/li>\n\n\n\n<li>Over-narrow data: mixing a small % of broad instructions preserves general capabilities.<\/li>\n\n\n\n<li>Data leakage: deduplicate across train\/dev\/test; watch for copy-paste contamination.<\/li>\n\n\n\n<li>Unstable training: too high LR, too long sequences without checkpointing, or no warmup.<\/li>\n\n\n\n<li>Hallucinations from synthetic data: add human verification for high-stakes tasks.<\/li>\n\n\n\n<li>Evaluation mismatch: do not rely on a single metric; triangulate with human eval.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-minimal-end-to-end-checklist\">A Minimal End-to-End Checklist<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Define target behaviors and acceptance 
criteria.<\/li>\n\n\n\n<li>Assemble and clean instruction\u2013response data; deduplicate and redact.<\/li>\n\n\n\n<li>Choose base model and context window; decide PEFT vs. full FT.<\/li>\n\n\n\n<li>Implement exact prompt\/response templates and loss masking.<\/li>\n\n\n\n<li>Train with conservative hyperparameters; log and checkpoint frequently.<\/li>\n\n\n\n<li>Evaluate on varied, held-out tests; include human preference checks.<\/li>\n\n\n\n<li>Harden safety with policy examples and runtime guardrails.<\/li>\n\n\n\n<li>Package adapters, version prompts, and deploy with monitoring.<\/li>\n\n\n\n<li>Collect feedback; iterate data and, if needed, add preference optimization.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-faq\">FAQ<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-how-much-data-do-i-need\">How much data do I need?<\/h3>\n\n\n\n<p>It depends on the gap between the base model and your target. Hundreds to a few thousand high-quality examples can materially improve formatting and instruction-following. Domain depth and style often benefit from tens of thousands of curated pairs. Quality beats quantity.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-what-hardware-is-required\">What hardware is required?<\/h3>\n\n\n\n<p>For PEFT on 7B\u201313B models with 2k\u20134k context, a single modern GPU (e.g., 24\u201380 GB VRAM) can suffice with gradient accumulation and checkpointing. Larger models or longer contexts require multi-GPU setups. Validate throughput and memory early with a small sample.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-will-sft-reduce-general-capabilities\">Will SFT reduce general capabilities?<\/h3>\n\n\n\n<p>It can, if training is narrow. 
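A common mitigation is to blend a slice of general instruction data back into the training set. A rough sketch of that blending (the helper name and the 20% fraction are illustrative, not from any particular library):

```python
import random

def mix_training_data(domain_examples, general_examples, general_frac=0.2, seed=0):
    """Blend domain SFT examples with general instruction data so roughly
    `general_frac` of the final mix is general-purpose (fraction illustrative)."""
    rng = random.Random(seed)
    # Solve n_general / (n_domain + n_general) == general_frac for n_general.
    n_general = round(len(domain_examples) * general_frac / (1 - general_frac))
    n_general = min(n_general, len(general_examples))
    mixed = list(domain_examples) + rng.sample(list(general_examples), n_general)
    rng.shuffle(mixed)
    return mixed

domain = [{"instruction": f"domain task {i}"} for i in range(80)]
general = [{"instruction": f"general task {i}"} for i in range(100)]
mixed = mix_training_data(domain, general)
print(len(mixed))  # 100 examples: 80 domain + 20 general
```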
Mix in general instruction data and monitor broad benchmarks to mitigate regressions.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-how-does-sft-differ-from-rlhf-dpo\">How does SFT differ from RLHF\/DPO?<\/h3>\n\n\n\n<p>SFT learns from labeled targets; RLHF\/DPO learn from preferences between outputs. Many production systems use SFT first for instruction adherence and format, then apply preference optimization to fine-tune trade-offs like verbosity and tone.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-takeaways\">Takeaways<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>SFT is the most accessible, high-leverage method to steer LLMs toward your domain and formats.<\/li>\n\n\n\n<li>Data quality and correct templating matter more than clever hyperparameters.<\/li>\n\n\n\n<li>Evaluate with both automatic metrics and human judgment; monitor in production and iterate.<\/li>\n<\/ul>\n\n\n\n<p>With disciplined data curation, careful training, and rigorous evaluation, supervised fine-tuning can turn a capable base model into a reliable system tailored to your organization\u2019s needs.<\/p>\n\n\n\n<ul class=\"wp-block-yoast-seo-related-links yoast-seo-related-links\">\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2024\/07\/15\/how-to-prevent-microsoft-365-emails-from-blacklisting\/\">How to Prevent Microsoft 365 Emails from Blacklisting<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/04\/29\/how-to-protect-your-openai-net-apps-from-prompt-injection-attacks-with-azure-ai-foundry\/\">Protect Your OpenAI .NET Apps from Prompt Injection Attacks<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2024\/09\/06\/how-to-create-an-azure-ai-language-account-using-rest-api\/\">How to Create an Azure AI Language Account Using REST API<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/04\/25\/fix-jetpack-contact-form-email-error-in-azure-wordpress-web-app\/\">Fix Jetpack 
Contact Form Email Error in Azure WordPress Web App<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2024\/05\/06\/controlling-ios-updates-with-intune-mdm\/\">Controlling iOS Updates with Intune MDM<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>A clear, detailed guide to supervised fine-tuning (SFT): what it is, when to use it, how to do it well, and how to evaluate, deploy, and govern SFT\u2019d models in production.<\/p>\n","protected":false},"author":1,"featured_media":53692,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"What is Supervised Fine-Tuning (SFT)","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"Learn what Supervised Fine-Tuning (SFT) is and how it can optimize pretrained language models for specific tasks.","_yoast_wpseo_opengraph-title":"","_yoast_wpseo_opengraph-description":"","_yoast_wpseo_twitter-title":"","_yoast_wpseo_twitter-description":"","_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[24,13,77],"tags":[],"class_list":["post-53688","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-blog","category-llm"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.4) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>What is Supervised Fine-Tuning (SFT) - CPI Consulting<\/title>\n<meta name=\"description\" content=\"Learn what Supervised Fine-Tuning (SFT) is and how it can optimize pretrained language models for specific tasks.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/22\/what-is-supervised-fine-tuning-sft\/\" \/>\n<meta 
property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What is Supervised Fine-Tuning (SFT)\" \/>\n<meta property=\"og:description\" content=\"Learn what Supervised Fine-Tuning (SFT) is and how it can optimize pretrained language models for specific tasks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/22\/what-is-supervised-fine-tuning-sft\/\" \/>\n<meta property=\"og:site_name\" content=\"CPI Consulting\" \/>\n<meta property=\"article:published_time\" content=\"2025-08-22T05:13:13+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-08-22T05:25:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.cloudproinc.com.au\/wp-content\/uploads\/2025\/08\/what-is-supervised-fine-tuning-sft.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"CPI Staff\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"CPI Staff\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/\"},\"author\":{\"name\":\"CPI Staff\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/person\\\/192eeeb0ce91062126ce3822ae88fe6e\"},\"headline\":\"What is Supervised Fine-Tuning (SFT)\",\"datePublished\":\"2025-08-22T05:13:13+00:00\",\"dateModified\":\"2025-08-22T05:25:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/\"},\"wordCount\":1484,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#primaryimage\"},\"thumbnailUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/08\\\/what-is-supervised-fine-tuning-sft.png\",\"articleSection\":[\"AI\",\"Blog\",\"LLM\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/\",\"url\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/\",\"name\":\"What is Supervised Fine-Tuning (SFT) - CPI 
Consulting\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#primaryimage\"},\"thumbnailUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/08\\\/what-is-supervised-fine-tuning-sft.png\",\"datePublished\":\"2025-08-22T05:13:13+00:00\",\"dateModified\":\"2025-08-22T05:25:16+00:00\",\"description\":\"Learn what Supervised Fine-Tuning (SFT) is and how it can optimize pretrained language models for specific tasks.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#primaryimage\",\"url\":\"\\\/wp-content\\\/uploads\\\/2025\\\/08\\\/what-is-supervised-fine-tuning-sft.png\",\"contentUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/08\\\/what-is-supervised-fine-tuning-sft.png\",\"width\":1536,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/08\\\/22\\\/what-is-supervised-fine-tuning-sft\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"What is Supervised Fine-Tuning 
(SFT)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#website\",\"url\":\"https:\\\/\\\/cloudproinc.com.au\\\/\",\"name\":\"Cloud Pro Inc - CPI Consulting Pty Ltd\",\"description\":\"Cloud, AI &amp; Cybersecurity Consulting | Melbourne\",\"publisher\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/cloudproinc.com.au\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#organization\",\"name\":\"Cloud Pro Inc - Cloud Pro Inc - CPI Consulting Pty Ltd\",\"url\":\"https:\\\/\\\/cloudproinc.com.au\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"\\\/wp-content\\\/uploads\\\/2022\\\/01\\\/favfinalfile.png\",\"contentUrl\":\"\\\/wp-content\\\/uploads\\\/2022\\\/01\\\/favfinalfile.png\",\"width\":500,\"height\":500,\"caption\":\"Cloud Pro Inc - Cloud Pro Inc - CPI Consulting Pty Ltd\"},\"image\":{\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/cloudproinc.com.au\\\/#\\\/schema\\\/person\\\/192eeeb0ce91062126ce3822ae88fe6e\",\"name\":\"CPI 
Staff\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"caption\":\"CPI Staff\"},\"sameAs\":[\"http:\\\/\\\/www.cloudproinc.com.au\"],\"url\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/author\\\/cpiadmin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"What is Supervised Fine-Tuning (SFT) - CPI Consulting","description":"Learn what Supervised Fine-Tuning (SFT) is and how it can optimize pretrained language models for specific tasks.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/22\/what-is-supervised-fine-tuning-sft\/","og_locale":"en_US","og_type":"article","og_title":"What is Supervised Fine-Tuning (SFT)","og_description":"Learn what Supervised Fine-Tuning (SFT) is and how it can optimize pretrained language models for specific tasks.","og_url":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/22\/what-is-supervised-fine-tuning-sft\/","og_site_name":"CPI Consulting","article_published_time":"2025-08-22T05:13:13+00:00","article_modified_time":"2025-08-22T05:25:16+00:00","og_image":[{"width":1536,"height":1024,"url":"https:\/\/www.cloudproinc.com.au\/wp-content\/uploads\/2025\/08\/what-is-supervised-fine-tuning-sft.png","type":"image\/png"}],"author":"CPI Staff","twitter_card":"summary_large_image","twitter_misc":{"Written by":"CPI Staff","Est. 