<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Neural-Networks on Shivasurya</title><link>http://shivasurya.me/categories/neural-networks/</link><description>Recent content in Neural-Networks on Shivasurya</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 08 Aug 2025 00:00:00 +0000</lastBuildDate><atom:link href="http://shivasurya.me/categories/neural-networks/feed.xml" rel="self" type="application/rss+xml"/><item><title>Exploring fun parts of Neural Network</title><link>http://shivasurya.me/2025/08/08/neural-network/</link><pubDate>Fri, 08 Aug 2025 00:00:00 +0000</pubDate><guid>http://shivasurya.me/2025/08/08/neural-network/</guid><description>&lt;p>Back in 2017, I used to tease my friend about his machine learning work (training models, dataset operations, ML deployments): &amp;ldquo;Come on, admit it, aren&amp;rsquo;t you just writing complex if-elif-else statements and calling yourself an ML engineer?&amp;rdquo; While Google was &lt;a href="https://ai.google.dev/edge/litert/android">bringing ML models&lt;/a> to mobile devices using TensorFlow, I remained indifferent because I couldn&amp;rsquo;t grasp the underlying mathematics or internal workings (which, honestly, continues to this day).&lt;/p>
&lt;p>&lt;img src="http://shivasurya.me/assets/media/if-else-ml.png" alt="IF-ELSE-Engineer" height="400">&lt;/p>
&lt;p>My perspective shifted after diving deep into projects like &lt;a href="https://shivasurya.me/llm/ai/2025/04/10/lessons-from-building-sherlock-automating-security-code-reviews-with-sourcegraph.html">Sherlock&lt;/a>, &lt;a href="https://shivasurya.me/llm/ai/2025/03/19/llm-powered-security-reviews.html">LLM-Powered Security Reviews&lt;/a>, and &lt;a href="https://codepathfinder.dev/blog/introducing-secureflow-extension-to-vibe-code-securely/">SecureFlow AI&lt;/a>. After studying numerous research papers about using language models to detect code vulnerabilities, I felt compelled to return to the basics and understand the fundamental workings of neural networks, particularly how they store information in their hidden layers.&lt;/p></description></item></channel></rss>