<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Yasar Blog]]></title><description><![CDATA[Yasar Blog]]></description><link>https://y3sar.hashnode.dev</link><generator>RSS for Node</generator><lastBuildDate>Fri, 23 Feb 2024 08:04:58 GMT</lastBuildDate><atom:link href="https://y3sar.hashnode.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="next" href="https://y3sar.hashnode.dev/rss.xml?page=1"/><item><title><![CDATA[The self sufficient Tiktok]]></title><description><![CDATA[<p>In an episode called "Joan is Awful" from the new season of Black Mirror, a world is depicted where a streaming service called StreamBerry produces deepfaked, synthetically generated TV shows centered on the lives of its users. This intriguing portrayal offers a glimpse into a possible dystopian future, one that closely aligns with the direction we are potentially heading, where companies prioritize user engagement over users' well-being, mental health, and privacy. This got me thinking: how far are we from a platform that generates content on its own and glues the audience to the screen even more than it does now?</p><h3 id="heading-how-it-might-look-like">What it might look like</h3><p>A screen filled with content that was generated only for you. You keep scrolling through the never-ending flow of captivating videos and images. Every now and then you stop for a few seconds to look at a deepfaked panda hugging a tree and then start scrolling again; that pause will be the positive reinforcement for the content-generating AI to generate more pandas, or trees, or maybe hugging. 
Slowly, through your interactions with the app, it learns what you like and what you dislike. It gets to know you.</p><p>These methods are already deployed, to some extent, in current social media platforms. Platforms already learn user preferences through their interactions with the app. But what will be new is that platforms will no longer rely on humans to generate content for them. When a platform can generate content by itself, it becomes self-sufficient. Diffusion models like Stable Diffusion and Midjourney can already generate images in a few seconds. Soon these models will be able to generate images and even videos in under a second. That breakthrough will enable these platforms to generate content on the fly: with each scroll, freshly generated content for your eyes and ears, tailored just for you.</p><h3 id="heading-how-it-will-work">How it will work</h3><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688653258308/e2fc8f08-73ed-4763-9682-e5eb33f79b68.png" alt class="image--center mx-auto" /></p><p>The thing about these diffusion models is that even though they can generate incredible images and videos, the prompt always comes from humans. So a human initiates the process of generation. For these self-sufficient content-generation platforms, prompt generation must be automated. The prompt must also come from an AI: a language model, like the ones you interact with when you talk to ChatGPT or Bing. Given user preference data and various types of metadata as input, a language model will generate a prompt, or maybe a series of prompts, that the diffusion model will then use to generate the content for the For You page. Let's call this language model the Prompt model. The Prompt model can also act as a reinforcement learning agent that tries to generate prompts to maximize its reward, i.e. user engagement. 
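The loop described here can be sketched in a few lines of Python. This is purely illustrative: `PromptModel`, `DiffusionModel`, and the watch-time reward rule below are all hypothetical stand-ins invented for this sketch, not real APIs.

```python
class PromptModel:
    """Stand-in for a language model that writes prompts from preferences."""
    def generate(self, preferences):
        # Favor the tag with the highest accumulated reward so far.
        top = max(preferences, key=preferences.get)
        return f"a photorealistic video of {top}"

class DiffusionModel:
    """Stand-in for an image/video diffusion model."""
    def generate(self, prompt):
        return {"prompt": prompt, "frames": 120}  # fake generated content

def feed_step(preferences, prompt_model, diffusion_model, watch_seconds):
    """One scroll: generate content, then reinforce from engagement."""
    prompt = prompt_model.generate(preferences)
    content = diffusion_model.generate(prompt)
    # Watch time is the reward that nudges the Prompt model's next choice.
    for tag in preferences:
        if tag in prompt:
            preferences[tag] += watch_seconds
    return content

preferences = {"pandas hugging trees": 5.0, "street food": 2.0}
item = feed_step(preferences, PromptModel(), DiffusionModel(), watch_seconds=8)
```

The point of the sketch is the closed loop: each scroll both generates content and updates the preference data that shapes the next prompt, with no human creator anywhere in the cycle.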
Maybe, using collaborative filtering, prompts generated for one user can even be propagated to the For You pages of other users with similar tastes.</p><p>Today, recommendation algorithms ask: what content or content creator can increase the engagement of a certain user? Soon the question will be: what prompts will generate the content that increases the engagement of a certain user?</p><h3 id="heading-content-creators-vs-prompt-creators">Content creators vs Prompt creators</h3><p>Not all the prompts will be generated by the platform's AI, of course. Users can also create their own prompts and generate content of their own. Maybe the generation services will be separate from the platforms the content is posted on, but either way, there will be competition between these groups over who can grab more of the users' attention. The scary thing about the Prompt model and the whole system is that it will never get discouraged. When your street-food vlogging page doesn't get many views or likes, you might think it's not worth it, you might stop, or your ego might get hurt by critical comments, or maybe the lack of comments. Machines don't have egos; all those negative comments will be used as fuel to generate even better content. One other thing: whatever human creators create is all going to be used to train the diffusion models anyway, unless laws are put in place to restrict the platforms from using user content as training data.</p><h3 id="heading-the-value-of-human-creators">The value of human creators</h3><p>If this ever becomes a reality, I'd like to think that the value of human creators will increase. In the sea of never-ending content, authenticity will become more important than surface-level beauty. The fact that a fellow human created this content will be of significant value to users, just as a handmade vase is more valuable than one made in a factory.</p><p>Or maybe the opposite will be true. 
Maybe the fact that a particular video, regardless of who or what made it, was made only for you will be what holds value. Maybe exclusivity will be more important than originality and authenticity. That would be a sad reality, one I don't want to see realized.</p><h3 id="heading-how-far-is-it">How far is it</h3><p>We have seen massive strides in the space of diffusion models, with Stable Diffusion and Midjourney creating breathtaking images from a few lines of text. But art can only keep a user engaged for so long. What we consume most on social media are memes or photographs of real-world events. Photographs cannot be replicated by AI without losing their authenticity. Memes, however, can be generated with these models, but the humor must come from the Prompt model, as must the text that goes in the image. Memes are, most of the time, based on recent events, so the Prompt model must keep up to date with what is happening in the world, which is a hard task in and of itself. Moreover, embedding text in the generated image has been very difficult for diffusion models, although architectures like <a target="_blank" href="https://github.com/deep-floyd/IF">DeepFloyd</a> have made progress on the problem.</p><p>Video generation, however, is <a target="_blank" href="https://video-diffusion.github.io/">still in its infancy</a> at the time of this writing. The video generation capabilities are impressive but not as lifelike as what is shown in the Black Mirror episode. 
Also, generating video will be a time-consuming task (unless models or GPUs get faster), which is unacceptable in an age where users have very little patience and cannot stand a loading screen.</p><p>So it seems a social media platform like this is quite far away, but not at all impossible to create.</p>]]></description><link>https://y3sar.hashnode.dev/the-self-sufficient-tiktok</link><guid isPermaLink="true">https://y3sar.hashnode.dev/the-self-sufficient-tiktok</guid><dc:creator><![CDATA[Samin Yasar]]></dc:creator><pubDate>Tue, 04 Jul 2023 10:54:13 GMT</pubDate><cover_image>https://cdn.hashnode.com/res/hashnode/image/upload/v1688467991871/7605cc06-4b52-4fbf-8f38-caa9d5e64914.jpeg</cover_image></item><item><title><![CDATA[A simple explanation of Bias and Variance]]></title><description><![CDATA[<p>I am a fan of simple and concise explanations. When it comes to deep learning and machine learning, it is often very useful to understand from a higher level what a concept really is before diving in deeper. This post will give you a higher-level understanding of two very important concepts: bias and variance.</p><p>If a model's predictions change drastically on unfamiliar data points, it means the model varies too much between data points. We want our model's predictions to be correct across multiple datasets, even if the underlying properties of those datasets are different. What we basically want is a model that can <strong><em>generalize</em></strong> across all the data on the planet. In other words, we want <strong><em>low variance</em></strong>, meaning the model's predictions will not <strong><em>vary</em></strong> drastically on unfamiliar datasets. If they do, the model has memorized the specifics of the dataset it was trained on. This is called <strong><em>high variance</em></strong>.</p><p>On the other hand, a model can simply perform badly on a dataset. 
This would mean the model has <strong>high bias</strong>: it makes incorrect predictions because it has learned a function too simple for the complexity of the dataset. <strong><em>High bias happens because the model is oversimplified and is not sufficient for the given task.</em></strong></p><p><strong>High Bias = Underfitting = Model is too simple</strong></p><p><strong>High Variance = Overfitting = Model is too complex</strong></p><h3 id="heading-high-bias-solutions">High bias solutions</h3><ol><li>Use a bigger, more complex network</li><li>Train for longer</li></ol><h3 id="heading-high-variance-solutions">High variance solutions</h3><ol><li>Get more training data</li><li>Use regularization</li></ol><p>The bias-variance tradeoff is not a big problem in deep learning because there are tools to drive down just the bias or just the variance without hurting the other.</p><p>Training a bigger network will drive down bias without affecting the variance too much.</p><p>Getting more data will drive down variance without affecting bias too much.</p>]]></description><link>https://y3sar.hashnode.dev/a-simple-explanation-of-bias-and-variance</link><guid isPermaLink="true">https://y3sar.hashnode.dev/a-simple-explanation-of-bias-and-variance</guid><dc:creator><![CDATA[Samin Yasar]]></dc:creator><pubDate>Sun, 24 Jul 2022 04:09:31 GMT</pubDate><cover_image>https://cdn.hashnode.com/res/hashnode/image/unsplash/YRvT2kjVRvg/upload/v1658635615216/a2kKh-J91.jpeg</cover_image></item><item><title><![CDATA[Startups Start Small]]></title><description><![CDATA[<p>Almost every successful company today started off as a startup trying to solve a very specific problem. They rarely started off by saying "We are going to revolutionize/disrupt/change <strong>[insert industry here]</strong>." Nobody can foresee such a massive change that far ahead in the future. What great founders and startups do is build a solution for a specific group of people. 
A group of people whose pains the founders know and understand on a deeper level.</p><h3 id="heading-apple">Apple</h3><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658156461963/kOYK0mp-E.jpg" alt="CopsonApple1_2k_cropped.jpg" /></p><p>Apple's founding story is a great example of S.S.S. Both Steves <strong>were not</strong> trying to build a personal computer that would sit on everyone's desk. Wozniak built a beautiful computer board (later to be called the Apple 1) which he showed off every two weeks at the Homebrew Computer Club. He used to tell people that they could build their own computers just like him, using the board schematics he provided for free. But no one in that club was good at soldering; they were all software people. That's when Jobs <a target="_blank" href="http://www.foundersatwork.com/steve-wozniak.html">said</a>:</p><blockquote><p>"Look, there are a lot of people that want to build it and they can get the chips, but they don't want to solder it all together. So why don't we make a PC board and they can plop their chips in the PC board (soldering a printed circuit board is easy, there are no wires) and then they've got it done."</p></blockquote><p>And that was it. That's how the world's most innovative company got started: building a beautiful PC board for some enthusiastic people from a small computer club.</p><h3 id="heading-youtube">YouTube</h3><p><img src="https://www.feedough.com/wp-content/uploads/2017/08/Youtube-History-dating.png" alt="youtube's early days" /></p><p>YouTube started out as a dating site, one where single people could upload videos of themselves and hook up with other users. They were not trying to be the biggest video streaming platform in the world. They were not even trying to be a video platform, until they realized that <strong>users were just uploading videos they wanted to store or show off online</strong>. That is when the founders decided to become a video hosting platform. 
This is a great example of paying attention to users' needs and changing the product to better meet them.</p><h3 id="heading-instagram">Instagram</h3><p><img src="https://pg-designs.ca/wp-content/uploads/2020/02/burbn.jpg" alt="Burbn app" /></p><p>Instagram started off as an iPhone app called Burbn, a location-based social media app where users could check in from different locations and <strong>upload photos of their experience</strong>. Later, founder Kevin Systrom and programmer Mike Krieger noticed that users were not using the check-in feature at all. Instead, they were using the photo-sharing feature A LOT. So the duo tweaked the app to help users upload a photo in only three clicks. The rest is history.</p><h3 id="heading-twitch">Twitch</h3><p><img src="https://www.cnet.com/a/img/resize/5b5de6a53806a7487b33b7178db43cd0be6677c9/2009/04/26/8c785bf9-f0ff-11e2-8c7c-d4ae52e62bcc/photojustin.png?auto=webp&amp;fit=crop&amp;height=675&amp;width=1200" alt="Justin.tv's early days" /></p><p>The video live-streaming giant Twitch is currently the most popular live-streaming platform among gamers and other creators. But before Twitch became, well... Twitch, it was called Justin.tv. The site used to live stream only founder Justin Kan's life, captured by a camera attached to his hat, with a chat system where viewers could interact with Justin. Probably the best example of a startup starting small.</p><h3 id="heading-outro">Outro</h3><p>Hopefully this will inspire aspiring startup founders when they look at their crappy MVP and doubt themselves. All successful startups were at this stage, so we shouldn't get disheartened if the first, or even the second or third, version of the product is bad. 
The key is to pay attention to the users and mold the product accordingly.</p><p>Inspirations for this article:</p><ol><li><p><a target="_blank" href="http://www.paulgraham.com/organic.html">Organic Startup Ideas - Paul Graham</a></p></li><li><p><a target="_blank" href="https://www.youtube.com/watch?v=G7TMqY7gkGY">Simple products that became big companies</a></p></li><li><p><a target="_blank" href="http://www.foundersatwork.com/steve-wozniak.html">Steve Wozniak interview</a></p></li></ol>]]></description><link>https://y3sar.hashnode.dev/startups-start-small</link><guid isPermaLink="true">https://y3sar.hashnode.dev/startups-start-small</guid><dc:creator><![CDATA[Samin Yasar]]></dc:creator><pubDate>Wed, 20 Jul 2022 02:36:55 GMT</pubDate><cover_image>https://cdn.hashnode.com/res/hashnode/image/upload/v1658272970117/8rKgkQMTu.png</cover_image></item><item><title><![CDATA[AI Perception vs Human Perception]]></title><description><![CDATA[<p>I was recently listening to Lex Fridman's <a target="_blank" href="https://www.youtube.com/c/lexfridman">AI podcast</a>. Lex asked Elon Musk which is the harder problem in the field of self-driving cars: perception or control. That is, making the AI understand what it is experiencing in the world, or making calculated judgements based on those perceptions?</p><p>Elon's reply:</p><blockquote><p>The hardest thing is having accurate representation of real world objects in vector space</p></blockquote><p>That got me thinking: can we ever accurately represent our real world with vectors and matrices? In other words, can we reliably capture all there is about our world and condense it, essentially, into a bunch of structured numbers? Well, recent developments in computer vision will make you say: of course we can. 
Neural networks can reliably predict what they are seeing in an image, <a target="_blank" href="https://news.stanford.edu/2017/11/15/algorithm-outperforms-radiologists-diagnosing-pneumonia/">sometimes even better than humans.</a> So that means the perception problem must be solved, right? We have figured out a way to condense real-world information into numbers? Not quite. Yes, computer scientists and researchers have figured out ways to represent a lot of real-world information as numbers. Let's say we are looking at a 3D matrix that represents an apple; if we understand everything about that matrix, we will have a pretty good structural idea of what an apple is. But looking at the apple itself adds more knowledge. Or does it?</p><h2 id="heading-the-knowledge-argument">The Knowledge Argument</h2><p>Time for some philosophy. There is a great thought experiment called the <a target="_blank" href="https://www.youtube.com/watch?v=mGYmiQkah4o">Mary's Room thought experiment</a> that asks whether or not conscious experience involves non-physical properties. It suggests that even if someone has complete theoretical knowledge about a certain thing, more knowledge can be acquired by <strong>experiencing</strong> that thing. Mary knows everything there is to know about color: every single property, from the different pigments to the waveforms it creates to even what kind of neurons light up when humans look at it. But, and here is the catch, Mary is color blind. She has never in her entire life perceived color, yet she knows everything about it. So now the question is: will she learn anything new about color if somehow she could see it? Will that experience hold any new information that was not captured by the theories?</p><p>In our world, AI is Mary. We have condensed everything about perceiving a real-world thing into a matrix form for the AI to understand. 
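To make that "matrix form" concrete, here is a minimal NumPy sketch: an RGB image, such as a photo of an apple, reaches a neural network as nothing more than a height × width × 3 array of numbers. The dimensions and values below are just illustrative.

```python
import numpy as np

# An RGB "photo" as a neural network receives it: a 3D array of numbers,
# height x width x 3 color channels, each value in 0-255.
apple = np.zeros((224, 224, 3), dtype=np.uint8)
apple[:, :, 0] = 200  # fill the red channel: a crude all-red "apple"

print(apple.shape)   # the matrix form: (224, 224, 3)
print(apple[0, 0])   # one pixel is just three numbers
```

Everything the network will ever "know" about the apple is in that grid of numbers; there is no object behind it, only the array.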
But without <strong>experiencing</strong> the thing, can it ever really understand? Let's forget about the philosophical aspect for a second. We perceive the world we live in; neural networks, on the other hand, are just mathematical functions living inside computer circuits. To truly understand what it is to perceive something in reality, AI needs to understand the higher-level concepts of the objects it perceives: how the different objects interact, what their relationships are. This is what Yoshua Bengio, a renowned AI researcher, calls <a target="_blank" href="https://www.youtube.com/watch?v=Yr1mOzC93xs">Higher Level Cognition.</a> Current neural networks are great at recognizing patterns in the matrices but don't really know what those patterns mean. There is a lot to uncover in the world of AI. This is just one of many reasons to be excited about the future.</p>]]></description><link>https://y3sar.hashnode.dev/ai-perception-vs-human-perception</link><guid isPermaLink="true">https://y3sar.hashnode.dev/ai-perception-vs-human-perception</guid><dc:creator><![CDATA[Samin Yasar]]></dc:creator><pubDate>Mon, 18 Jul 2022 03:28:42 GMT</pubDate><cover_image>https://cdn.hashnode.com/res/hashnode/image/upload/v1658113894054/049a21taN.png</cover_image></item></channel></rss>