## STRANDS

> [!tip] AI
> This section includes several behind-the-scenes articles about the research and development of training machine learning models on my own datasets, for various projects and experiments.

### Approach 01

#ai #machinelearning #casestudies #research #development #fragments #generative

Using a large dataset of my own designed frames and elements to train [Low-Rank Adaptation](https://huggingface.co/docs/diffusers/training/lora) (LoRA) models. All training and image generation were done offline on my local machine.

Training LoRA models typically starts from an existing model that was pre-trained on a large dataset; in this case I used Stable Diffusion 1.5, due to the limitations of my local hardware setup. Below are some examples after initial training (AI-generated images). Minimal code sketches for loading and comparing the trained weights appear at the end of this section.

![[9d25610f-6748-40e1-a14e-05866249caff_rw_1920.png]]
![[be4f9de2-3bfd-48e3-a745-d9e6f1d8e519_rw_1200.png]]
![[0e940163-efba-4bb3-b341-9988fe2a51c2_rw_600.png]]
![[9c9b517a-000c-4658-8b88-ea8a9d026375_rw_600.png]]
![[35c643f0-9d3d-4c4a-a315-9117c6f6d082_rw_600.png]]
![[69bb872a-e5b7-4ce2-b061-9e12bba3aaf7_rw_1920.png]]
![[9535deda-dc0a-49b6-b141-525c1ced32ef_rw_1200.png]]

The fact that all training and image generation are done offline does not change the nature of the task: the key aspect lies in the training process itself, which leverages a pre-existing model and adapts it to your own dataset.

DATASET:

![[6b700913-4c1e-415f-a16a-fa2a0a08f9a1.png]]
![[d8cc70b1-07d3-49c3-8a57-df8f8ff97086.jpg]]

---

![[8d7c47a0-c48c-4fb2-9c6f-8315e6a01bfa_rw_1200.png]]
![[453ccc24-6e3f-4668-9ddd-b599727094eb_rw_600.png]]
![[7910ab9d-6268-40d9-b88b-bd695439454a_rw_1200.png]]
![[6146353f-07fe-4daa-8ef9-6253852c7a17_rw_600.png]]
![[d925d814-23e3-49f3-8861-91e26b9391d2_rw_1200.png]]
![[e52fda94-5863-4dba-a2bc-57c0abf27239_rw_600.png]]
![[e96f01e0-a80c-4748-bf0a-cb9b914d4f81_rw_600.png]]
![[006b5801-f325-4c74-947a-0eac94d5df1d_rw_600.png]]
![[29ae4ca7-4c44-46c5-932e-938cd2178d80_rw_1200.png]]
![[285eb25e-bd23-4429-8bba-bf32713bff8c_rw_1200.png]]
![[336f5942-bd0d-42e5-8f1d-6b630c39e63c_rw_600.png]]

The dataset is divided into three sections, each used to train a separate safetensors file with a different image set and epoch count. This approach allows testing the different versions against each other to determine which one works best (see the comparison sketch after this section).

![[4b92f585-3391-443c-92f9-96a69bc61d46.png]]

### Approach 02

Using an effective and lightweight adapter to add image-prompt capability to a pre-trained text-to-image diffusion model. While it yields great results, balancing the text prompt against the reference visuals can be challenging, due to the complex setup involving CLIP vision models, adapter models, and the main checkpoint model. In this scenario I'm again using Stable Diffusion 1.5.

"While this approach didn't align with my original goal, I achieved great results by applying this workflow to different purposes. I will be covering this aspect in the [[Fragments The Game]] boards."

![[561baa95-9947-4797-b291-c1173b5fabb2_rw_1920.png]]
![[0383a614-b11b-4bb1-9bd2-440554109ecc_rw_1920.png]]
![[a1d4abd1-3239-4822-a618-216437b787e8_rw_1920.png]]
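For Approach 01, here is a minimal sketch of loading a trained LoRA into SD 1.5 for offline generation, assuming a recent `diffusers` install and the base checkpoint already in the local Hugging Face cache; the file name `strands_lora.safetensors` and the prompt are hypothetical placeholders, not the actual trained weights.

```python
# Minimal offline LoRA inference sketch (Approach 01).
# Assumptions: diffusers + torch installed, SD 1.5 cached locally,
# and a LoRA file "strands_lora.safetensors" in the working directory.
import torch
from diffusers import StableDiffusionPipeline

# Load the SD 1.5 base checkpoint from the local cache only, so that
# generation stays fully offline, matching the workflow above.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    local_files_only=True,  # no network access at inference time
).to("cuda")

# Attach the LoRA weights trained on the custom frames/elements dataset.
pipe.load_lora_weights(".", weight_name="strands_lora.safetensors")

image = pipe(
    "abstract frame elements, strands style",   # hypothetical prompt
    num_inference_steps=30,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.8},      # LoRA influence strength
).images[0]
image.save("sample.png")
```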
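To compare the safetensors files produced from the three dataset splits, one option is to hold the seed, prompt, and sampler settings fixed while swapping adapters, so any difference in output comes only from the training data and epoch count. The file names below are hypothetical stand-ins for the actual checkpoints.

```python
# Comparison harness sketch for the three dataset splits (Approach 01).
# File names are hypothetical placeholders for the real safetensors.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    local_files_only=True,
).to("cuda")

candidates = [
    "split_a_epoch10.safetensors",
    "split_b_epoch10.safetensors",
    "split_c_epoch20.safetensors",
]

for name in candidates:
    # A fixed seed keeps everything except the LoRA constant, so the
    # outputs isolate the effect of each split/epoch combination.
    generator = torch.Generator("cuda").manual_seed(42)
    pipe.load_lora_weights(".", weight_name=name)
    image = pipe(
        "abstract frame elements, strands style",
        generator=generator,
        num_inference_steps=30,
    ).images[0]
    image.save(f"compare_{name}.png")
    pipe.unload_lora_weights()  # reset before loading the next candidate
```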
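Approach 02's description, a lightweight adapter that gives a pre-trained diffusion model image-prompt capability via a CLIP vision encoder, matches the IP-Adapter technique. Below is a minimal `diffusers` sketch assuming the publicly released `h94/IP-Adapter` SD 1.5 weights, not the exact node-based workflow used here; the reference image path is a placeholder.

```python
# IP-Adapter style image prompting sketch (Approach 02).
# Assumptions: diffusers with IP-Adapter support, h94/IP-Adapter weights.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The adapter feeds CLIP-vision features of the reference image into the
# UNet's cross-attention, alongside the usual text conditioning.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)

# This scale is the balance knob discussed above: higher values favour
# the reference visuals, lower values favour the text prompt.
pipe.set_ip_adapter_scale(0.6)

reference = load_image("reference_frame.png")  # hypothetical input image
image = pipe(
    prompt="abstract frame elements",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
image.save("ip_adapter_sample.png")
```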