{"id":1428,"date":"2025-12-16T09:46:54","date_gmt":"2025-12-16T09:46:54","guid":{"rendered":"https:\/\/imalogic.com\/blog\/?p=1428"},"modified":"2025-12-16T09:46:54","modified_gmt":"2025-12-16T09:46:54","slug":"integrating-ai-agents-in-the-video-creation-workflow-balancing-automation-and-creativity","status":"publish","type":"post","link":"https:\/\/imalogic.com\/blog\/2025\/12\/16\/integrating-ai-agents-in-the-video-creation-workflow-balancing-automation-and-creativity\/","title":{"rendered":"Integrating AI Agents in the Video Creation Workflow: Balancing Automation and Creativity"},"content":{"rendered":"<body>\n<h2 class=\"wp-block-heading is-style-default\"><strong>Introduction<\/strong><\/h2>\n\n\n\n<p>The use of AI in video generation is evolving fast, and we\u2019re starting to imagine workflows where intelligent agents collaborate with human creators. But how realistic is it to automate creative video production using multiple agents? Let\u2019s explore the idea and its potential.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Vision: Collaborative Creation Between Humans and AI<\/strong><\/h2>\n\n\n\n<p>In a traditional video production workflow, each phase, from the storyboard to the final edit, requires both creative and technical decisions.<br>AI agents could enhance this process by handling specific tasks, optimizing time, and allowing creators to focus on storytelling and visual direction.<\/p>\n\n\n\n<p>Here\u2019s what an agent-based video creation workflow could look like:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/Untitled.png?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"810\" height=\"540\" data-attachment-id=\"1431\" 
data-permalink=\"https:\/\/imalogic.com\/blog\/2025\/12\/16\/integrating-ai-agents-in-the-video-creation-workflow-balancing-automation-and-creativity\/untitled\/\" data-orig-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/Untitled.png?fit=1536%2C1024&amp;ssl=1\" data-orig-size=\"1536,1024\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"Untitled\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/Untitled.png?fit=810%2C540&amp;ssl=1\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/Untitled.png?resize=810%2C540&#038;ssl=1\" alt=\"\" class=\"wp-image-1431\" style=\"width:658px;height:auto\" loading=\"lazy\" srcset=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/Untitled.png?w=1536&amp;ssl=1 1536w, https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/Untitled.png?resize=300%2C200&amp;ssl=1 300w\" sizes=\"auto, (max-width: 810px) 100vw, 810px\" \/><\/a><\/figure>\n<\/div>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>1\ufe0f\u20e3 Storyboard and Script Generation<\/strong><\/h2>\n\n\n\n<p>An agent could analyze a concept, theme, or prompt and automatically generate a first draft of the script or storyboard.<br>Instead of starting from scratch, creators receive structured ideas, visual references, and scene breakdowns to refine manually.<\/p>\n\n\n\n<p><strong>Example:<\/strong><br>\u201cImagine a short film 
about a robot and a cat exploring an abandoned city.\u201d<br>\u2192 The AI agent drafts a sequence of key moments, camera angles, and emotional beats.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>2\ufe0f\u20e3 Image and Animation Generation<\/strong><\/h2>\n\n\n\n<p>Once the storyboard is ready, another agent could handle image or animation generation using diffusion models or video synthesis tools.<br>This agent could test multiple styles, produce variations, and evaluate visual coherence.<\/p>\n\n\n\n<p><strong>The human role:<\/strong> choosing the most compelling render and adjusting artistic direction.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>3\ufe0f\u20e3 Post-Processing and Style Consistency<\/strong><\/h2>\n\n\n\n<p>A dedicated agent could analyze all generated scenes to ensure color, lighting, and texture consistency.<br>It could apply filters or fine-tune outputs to maintain a unified artistic signature across the video.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>4\ufe0f\u20e3 Automated Editing and Sequencing<\/strong><\/h2>\n\n\n\n<p>The next logical step is an editing agent, capable of assembling generated clips according to the storyboard, synchronizing transitions, and even suggesting background music or ambient effects.<br>While full automation is still challenging, partial assistance here can save hours of manual editing.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>5\ufe0f\u20e3 Feedback and Iteration Loop<\/strong><\/h2>\n\n\n\n<p>Finally, a \u201creview agent\u201d could assess the entire production based on the original concept and suggest improvements.<br>This creates a dynamic feedback loop between human and AI, pushing the final 
result closer to the creator\u2019s vision.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Challenges Ahead<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Context understanding:<\/strong> Each agent must interpret the creative intention and coordinate smoothly with others.<\/li>\n\n\n\n<li><strong>Artistic quality:<\/strong> Automation can accelerate production, but it still struggles to replicate nuanced human creativity.<\/li>\n\n\n\n<li><strong>Technical complexity:<\/strong> Orchestrating multiple AI systems, tools, and data flows requires robust architecture and synchronization.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>A Realistic Approach<\/strong><\/h2>\n\n\n\n<p>For now, the most efficient path is <strong>semi-automation<\/strong>:<br>letting agents assist in brainstorming, visual generation, or post-processing, while the human retains creative control.<br>This hybrid model leverages the best of both worlds: AI speed and human sensitivity.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Bridging Script and Image Generation: Managing Randomness and Style Consistency<\/strong><\/h2>\n\n\n\n<p>One of the most unpredictable stages in AI video creation lies between the <strong>script<\/strong> and the <strong>image or animation generation<\/strong>.<br>Even with carefully written prompts, the results often vary, sometimes subtly, sometimes drastically, due to the <strong>internal randomness (seed)<\/strong> of the generation models, which isn\u2019t always reproducible or exposed to the user.<\/p>\n\n\n\n<p>This randomness can lead to surprising creative outcomes\u2026 but also to inconsistencies that make it difficult to maintain a coherent visual language across an entire sequence or film.<\/p>\n\n\n\n<hr 
class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Role of Human Sensitivity in Selecting AI-Generated \u201cRushes\u201d<\/strong><\/h3>\n\n\n\n<p>Each generated frame or animation can be seen as a \u201crush\u201d, a raw material for storytelling.<br>Selecting the right ones isn\u2019t just a matter of technical quality.<br>It requires <strong>human sensitivity<\/strong>: understanding the emotional tone, rhythm, and meaning of each shot in the context of the story.<\/p>\n\n\n\n<p>No AI currently captures the <em>intentional emotion<\/em> behind a scene the way a human creator can.<br>That\u2019s why this selection process remains a crucial <strong>human-in-the-loop<\/strong> phase, where artistic vision guides the narrative cohesion.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Adding a Reference &amp; Pre-Processing Layer<\/strong><\/h3>\n\n\n\n<p>To reduce the chaos of random outputs and keep visual coherence, it\u2019s helpful to introduce an <strong>intermediate stage<\/strong> between \u201cScript\u201d and \u201cImage Generation\u201d:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Reference &amp; Style Pre-Processing<\/strong><\/p>\n<\/blockquote>\n\n\n\n<p>In this step, the system uses:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Reference graphics or screenshots<\/strong> (e.g., from 3D renders or moodboards)<\/li>\n\n\n\n<li><strong>Multiple perspective samples<\/strong> of the same scene to enforce stylistic consistency<\/li>\n\n\n\n<li>A <strong>pre-processing module<\/strong> that harmonizes visual parameters (lighting, color palette, framing cues) before generation<\/li>\n<\/ul>\n\n\n\n<p>This ensures that, even if each image is generated independently, they all share the same visual DNA, helping maintain continuity across the video.<\/p>\n\n\n\n<hr 
class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Updated Workflow<\/strong><\/h3>\n\n\n\n<p>You can represent this refinement in the workflow as:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/ChatGPT-Image-9-nov.-2025-15_54_14.png?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"810\" height=\"1215\" data-attachment-id=\"1433\" data-permalink=\"https:\/\/imalogic.com\/blog\/2025\/12\/16\/integrating-ai-agents-in-the-video-creation-workflow-balancing-automation-and-creativity\/chatgpt-image-9-nov-2025-15_54_14\/\" data-orig-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/ChatGPT-Image-9-nov.-2025-15_54_14.png?fit=1024%2C1536&amp;ssl=1\" data-orig-size=\"1024,1536\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"ChatGPT Image 9 nov. 
2025, 15_54_14\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/ChatGPT-Image-9-nov.-2025-15_54_14.png?fit=683%2C1024&amp;ssl=1\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/ChatGPT-Image-9-nov.-2025-15_54_14.png?resize=810%2C1215&#038;ssl=1\" alt=\"\" class=\"wp-image-1433\" style=\"width:409px;height:auto\" loading=\"lazy\" srcset=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/ChatGPT-Image-9-nov.-2025-15_54_14.png?w=1024&amp;ssl=1 1024w, https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2025\/11\/ChatGPT-Image-9-nov.-2025-15_54_14.png?resize=200%2C300&amp;ssl=1 200w\" sizes=\"auto, (max-width: 810px) 100vw, 810px\" \/><\/a><\/figure>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\"><strong>Practical Examples: The \u039bIVORYA Experiments<\/strong><\/h2>\n\n\n\n<p>To better illustrate how this workflow can be applied in real-world creative projects, here are four experimental videos I created under the <em>\u039bIVORYA<\/em> series.<br>Each one explores a different aspect of AI-assisted storytelling, combining emotional intention, stylistic exploration, and iterative refinement.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>\u039bIVORYA | \u201cTristesse\u201d \u2013 Zaho De Sagazan (Fan Art IA)<\/strong><\/h3>\n\n\n\n<p>My <strong>first AI-driven video experiment<\/strong>, combining <em>lipsync AI<\/em> and <em>reference image-based animation<\/em>.<br>The main challenge was achieving <strong>realistic tear animation<\/strong> and a <strong>comic-inspired visual effect<\/strong> that still carried emotional weight.<br>It involved extensive <em>trial and error<\/em> to balance realism with expressive stylization.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube 
wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" class=\"youtube-player\" width=\"810\" height=\"456\" src=\"https:\/\/www.youtube.com\/embed\/NrucbKEgHTM?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en-US&amp;autohide=2&amp;wmode=transparent\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\"><\/iframe><\/span>\n<\/div><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>\u039bIVORYA | \u201cSans Rancune \u2013 Becoming Light\u201d<\/strong><\/h3>\n\n\n\n<p>This second project used a <strong>single, carefully selected reference image<\/strong> as the starting point.<br>The goal was to explore <strong>different emotional tones<\/strong> and <strong>generate expressive rushes<\/strong> through highly detailed prompts.<br>The focus was on <strong>emotional resonance<\/strong>: how variations in AI interpretation could reflect subtle mood shifts in light, expression, and motion.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" class=\"youtube-player\" width=\"810\" height=\"456\" src=\"https:\/\/www.youtube.com\/embed\/OWJQjPuWg3M?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en-US&amp;autohide=2&amp;wmode=transparent\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\"><\/iframe><\/span>\n<\/div><\/figure>\n\n\n\n<hr class=\"wp-block-separator 
has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>\u039bIVORYA | \u201cLe C\u0153ur du Temps\u201d<\/strong><\/h3>\n\n\n\n<p>Here, I worked mainly with <strong>reference images for backgrounds<\/strong>, using them as stylistic anchors for each scene.<br>One of the biggest challenges was maintaining <strong>robot design coherence<\/strong> between sequences, especially across multiple AI generations.<br>This led me to develop and test <strong>a pre-processing approach<\/strong> for visual alignment: an essential step for future projects.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" class=\"youtube-player\" width=\"810\" height=\"456\" src=\"https:\/\/www.youtube.com\/embed\/BvSTSYvRnOI?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en-US&amp;autohide=2&amp;wmode=transparent\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\"><\/iframe><\/span>\n<\/div><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>\u039bIVORYA | \u201cThe Last Link\u201d<\/strong><\/h3>\n\n\n\n<p>This fourth experiment explored an <strong>anime\/manga visual direction<\/strong>, with highly specific prompts to generate and animate complex sequences, such as the <em>dome destruction<\/em> and <em>\u201ctree magic\u201d scene in the forest<\/em>.<br>After this project, I realized the importance of integrating <strong>consistent reference imagery<\/strong> and a <strong>style normalization stage<\/strong> before generation to maintain narrative and visual unity.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video 
is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" class=\"youtube-player\" width=\"810\" height=\"456\" src=\"https:\/\/www.youtube.com\/embed\/gZwkOmJFy3g?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en-US&amp;autohide=2&amp;wmode=transparent\" allowfullscreen=\"true\" style=\"border:0;\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox\"><\/iframe><\/span>\n<\/div><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Reflections on the Process<\/strong><\/h3>\n\n\n\n<p>Through these projects, I learned that <strong>AI video creation is more an iterative dialogue than a deterministic process<\/strong>.<br>Each generated rush (whether successful or not) helps refine the story\u2019s emotion and coherence.<br>AI offers endless creative variation, but <strong>emotional authenticity still depends on human sensitivity<\/strong>: the way we select, sequence, and emotionally interpret each fragment.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>Creating a video entirely through AI agents is still a complex challenge. 
Yet each experiment teaches us how machines can extend our creative reach.<br>When emotion guides the process and technology follows, imagination finds new forms of expression.<\/p>\n\n\n\n<p>That\u2019s the philosophy behind <strong>Imalogic<\/strong>: <em>logic serving imagination.<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n\n\n\n<p><em>Written by David Lovera, exploring the intersection of AI, creativity, and video production.<\/em><\/p>\n<\/body>","protected":false},"excerpt":{"rendered":"<p>Introduction The use of AI in video generation is evolving fast, and we\u2019re starting to imagine workflows where intelligent agents<\/p>\n","protected":false},"author":1,"featured_media":1436,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[134,172,133,66,6,171],"tags":[174,189,186,190,183,173,191,187,188,192],"class_list":["post-1428","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-a-i","category-art","category-artificial-intelligence","category-computer-graphics","category-signal-processing","category-video","tag-ai-animation","tag-ai-art","tag-ai-video","tag-creative-tech","tag-digital-art","tag-generative-video","tag-imalogic","tag-storytelling","tag-video-workflow","tag-ivorya"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/202
5\/11\/grok-video-3f681385-8718-4e55-9a0f-e83c6de178e6.mp4_snapshot_00.06.041.jpg?fit=832%2C1504&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p8J21V-n2","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts\/1428","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/comments?post=1428"}],"version-history":[{"count":5,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts\/1428\/revisions"}],"predecessor-version":[{"id":1449,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts\/1428\/revisions\/1449"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/media\/1436"}],"wp:attachment":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/media?parent=1428"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/categories?post=1428"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/tags?post=1428"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}