Graph representation learning (or graph embedding) is the term commonly used for the process of transforming a graph data structure into a more structured vector form. This enables downstream analysis by providing manageable fixed-length vectors. Ideally, these vectors should capture the graph structure (topology) in addition to the node features. We use graph neural networks (GNNs) to perform this transformation. For a basic high-level idea about GNNs, you can take a peek at the following article.
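To make the idea concrete, here is a minimal NumPy sketch of a single message-passing (GCN-style) step on a toy graph. The graph, features, and mean normalisation are illustrative assumptions, not a full GNN:

```python
import numpy as np

# Toy graph: 3 nodes, edges 0-1 and 1-2 (adjacency matrix)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # per-node feature vectors

A_hat = A + np.eye(3)                   # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(1))    # inverse degree for mean aggregation
H = D_inv @ A_hat @ X                   # each node averages its neighbourhood

# Each row of H is a fixed-length vector mixing topology and features
```

A real GNN would follow this aggregation with a learned weight matrix and a non-linearity, and stack several such layers.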
In this article, I will talk about the GraphSAGE architecture which is a variant of message passing neural networks (MPNN). …
Graph neural networks (GNNs) are gaining popularity due to the ubiquitous nature of the graph data structure. Graphs enable us to model many different problems in fields such as (but not limited to) biology, sociology, ecology, vision, education, and economics. Moreover, graph representations enable us to handle unstructured data at massive scales.
In this article, I will show how one might use a simple GNN in tasks such as classification, clustering, and visualization. I will be using a GCN (Graph Convolutional Network) as the running example. …
Autoencoders, in general, intend to learn a lower-dimensional representation of data. One of their main advantages is that they can learn much more complex low-dimensional representations, whereas PCA-like decompositions are limited by their linear nature. Feel free to have a look at my article about autoencoders.
As usual, I will be talking about an application in bioinformatics.
In contrast to conventional autoencoders (AEs), variational autoencoders (VAEs) belong to the family of generative models. This is because VAEs learn a latent distribution for the input data, so they are capable of generating new data points given new…
In one of my previous articles, I explained what vectorization is and how it works. In a gist, the process enables us to convert variable-length nucleotide sequences into fixed-length numeric vectors.
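One common way to achieve this is k-mer frequency counting. The sketch below is purely illustrative (it is not the Seq2Vec implementation): whatever the sequence length, the output always has 4^k dimensions.

```python
from itertools import product

def kmer_vector(seq, k=2):
    """Map a variable-length DNA sequence to a fixed-length
    vector of normalised k-mer frequencies (4**k dimensions)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in counts:          # skip k-mers with ambiguous bases
            counts[km] += 1
    total = max(sum(counts.values()), 1)  # avoid division by zero
    return [counts[km] / total for km in kmers]

vec = kmer_vector("ACGTACGT")
# len(vec) == 16 for k=2, regardless of the input length
```

Real tools use larger k, sparse representations, or learned embeddings, but the fixed-length property is the same.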
For all the programming lovers, the link is below;
However, in this article, I will rather focus on utility and research importance. Hence, I will stick to my tool Seq2Vec for the rest of the article. You can follow the README and install it from the following source;
A memory-mapped file is a segment of virtual memory. Virtual memory is an abstraction over physical memory that the operating system (OS) provides to a process. Should the OS run out of memory or see a process idling for long, such virtual memory segments are “paged” out to a location on the physical disk (the “swap”). So the “swap” is essentially the portion of virtual memory that lives on disk.
More precisely, a memory-mapped file is a mirror of a portion of a file (or the entire file) in virtual memory, managed completely by the operating system.
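In Python, the standard-library `mmap` module exposes this OS facility. A minimal sketch (the file path is a made-up temporary file for illustration):

```python
import mmap
import os
import tempfile

# Create a small file to map (hypothetical demo path)
path = os.path.join(tempfile.gettempdir(), "mmap_demo.bin")
with open(path, "wb") as f:
    f.write(b"hello memory mapped world")

# Map the file into the process's virtual memory; the OS pages
# bytes in and out on demand, so no explicit read() is needed.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        first = bytes(mm[0:5])   # slice it like a bytearray
        mm[0:5] = b"HELLO"       # writes propagate back to the file
        mm.flush()
```

Because the mapping is demand-paged, you can work with files far larger than RAM while touching only the pages you actually access.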
In computer vision, object detection is an interesting field with numerous applications. Like any other supervised machine learning task, it needs annotations, or labels, as well. However, annotating images can be a tedious, time-consuming task for many of us (the lazy ones?). I felt it quite strongly when I was writing my previous article, linked below.
So the big question arises;
How can we generate annotated images, automatically, with desired backgrounds?
Well, the answer is simple: use existing objects and overlay them on some backgrounds! There are several blogs on the internet that would say mostly…
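The overlay idea can be sketched with plain NumPy arrays. The background and object below are dummy placeholders; the point is that the bounding-box label comes for free, because we choose where to paste:

```python
import numpy as np

def overlay(background, obj, x, y):
    """Paste an object crop onto a background at (x, y) and
    return the composite image plus its bounding-box label."""
    bg = background.copy()
    h, w = obj.shape[:2]
    bg[y:y + h, x:x + w] = obj
    bbox = (x, y, x + w, y + h)   # (x_min, y_min, x_max, y_max)
    return bg, bbox

background = np.zeros((100, 100, 3), dtype=np.uint8)  # dummy black scene
obj = np.full((20, 30, 3), 255, dtype=np.uint8)       # dummy white "object"
img, bbox = overlay(background, obj, x=10, y=40)
# bbox == (10, 40, 40, 60) — the annotation, generated automatically
```

A real pipeline would add alpha blending, random scaling/rotation, and collision checks between pasted objects, but the labelling principle is the same.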
A few weeks back, while window shopping on AliExpress, I came across the wonderful Maixduino device. It was claimed to carry a RISC-V architecture along with a KPU (a general-purpose neural network processor). The contrasting specs of the board were as follows;
Truth be told, the unit is not new and has only been attracting attention lately. Given my interest in edge computing, I thought of presenting a complete end-to-end guide for an object detection example. This example is…
This article is motivated by my previous article on ESPHome, in which we built a door sensor using the ESPHome library. However, an off-the-shelf platform grants us little to no flexibility when it comes to micro-optimizations. Case in point: I noted that the best implementation consumed over 1 mA and took nearly 2 seconds to respond (even after using MQTT; the ESPHome API is known to be slower with Home Assistant).
There are several considerations that we need to think of when it comes to a critical component like a door sensor.
Home automation can be expensive and complex. The arrival of IoT platforms from vendors such as Tuya, Philips, etc. has made home automation a little simpler. However, the cost still remains rather high and the customisations are quite limited. Last but not least, none of the mentioned services comes for free: you are likely paying with your privacy.
IoT platforms provide free-of-charge Alexa/Google Assistant integrations, which means you are either charged for several years of service at purchase, or your data is more profitable to the vendors.
Both HomeAssistant and ESPHome are Free and Open-source
Enough with motivation…
Are you bored with your ordinary TV? Do you want to expand your media experience to a whole new level, yet you are limited by your budget? Like me? Well, this would be the perfect Smart TV solution!
Disclaimer: This tutorial might not be complete and may contain differences, since the Google Cloud Console is continuously changing. However, the underlying configuration should remain much the same.
For this article, we will be installing OSMC as the media operating system. Let’s get started!