I manage the entire blog by myself. I intend to post biweekly, but if I'm busy with university or something then I'll delay an issue by a week.

Welcome to the second issue of machine learning issues. I worked tirelessly throughout the week to find you the best research in machine learning and the field of AI.

If you like the kind of articles I publish and want to follow my weekly machine learning issues, subscribe to my newsletter.

Now let's get started with this month's coolest research.

Read this here

In machine learning, linear combinations of losses are all over the place. In fact, they are the standard approach, despite being full of pitfalls, especially the way these linear combinations make your algorithm hard to tune.

Read the research here

As machine learning algorithms have been widely deployed across applications, many concerns have been raised over the fairness of their predictions, especially in high-stakes settings (such as facial recognition and medical imaging). To respond to these concerns, the community has proposed and formalized various notions of fairness as well as methods for rectifying unfair behavior. While fairness constraints have been studied extensively for classical models, the effectiveness of methods for imposing fairness on deep neural networks is unclear. In this paper, we observe that these large models overfit fairness objectives, and produce a range of unintended and undesirable consequences. We conduct our experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.

Read the research paper here

Graph neural networks (GNNs) are a powerful architecture for tackling graph learning tasks, yet have been shown to be oblivious to eminent substructures, such as cycles. We present TOGL, a novel layer that incorporates global topological information of a graph using persistent homology. TOGL can be easily integrated into any type of GNN and is strictly more expressive in terms of the Weisfeiler–Lehman test of isomorphism. Augmenting GNNs with our layer leads to beneficial predictive performance, both on synthetic data sets, which can be trivially classified by humans but not by ordinary GNNs, and on real-world data.

I am theroyakash. I develop software and machine learning and deep learning models. I am currently writing a book on understanding fundamental machine learning algorithms from scratch; I assume the reader has no knowledge of machine learning or is just starting out. The book will offer from-scratch implementations of a variety of machine learning algorithms and the mathematics behind them. The book is in its early stages and will be distributed for free over the internet. You can get early access by just filling out this contact form here.

Read about this here

MIT researchers’ new hardware and software system streamlines state-of-the-art sentence analysis.

See the research paper here

Being able to learn dense semantic representations of images without supervision is an important problem in computer vision. However, despite its significance, this problem remains rather unexplored, with a few exceptions that considered unsupervised semantic segmentation on small-scale datasets with a narrow visual domain. In this paper, we make a first attempt to tackle the problem on datasets that have been traditionally utilized for the supervised case. To achieve this, we introduce a novel two-step framework that adopts a predetermined prior in a contrastive optimization objective to learn pixel embeddings. This marks a large deviation from existing works that relied on proxy tasks or end-to-end clustering. Additionally, we argue about the importance of having a prior that contains information about objects, or their parts, and discuss several possibilities to obtain such a prior in an unsupervised manner.

Extensive experimental evaluation shows that the proposed method comes with key advantages over existing works. First, the learned pixel embeddings can be directly clustered in semantic groups using K-Means. Second, the method can serve as an effective unsupervised pre-training for the semantic segmentation task. In particular, when fine-tuning the learned representations using just 1% of labeled examples on PASCAL, we outperform supervised ImageNet pre-training by 7.1% mIoU.

See the github repo here

This package provides easy-to-use, state-of-the-art machine translation for more than 100 languages. The highlights of this package are:

- Easy installation and usage: Use state-of-the-art machine translation with 3 lines of code
- Automatic download of pre-trained machine translation models
- Translation between 150+ languages
- Automatic language detection for 170+ languages
- Sentence and document translation
- Multi-GPU and multi-process translation

You can get started with this

```
pip install -U easynmt
```

Read about the research here

Inverse design arises in a variety of areas in engineering such as acoustic, mechanics, thermal/electronic transport, electromagnetism, and optics. Topology optimization is a major form of inverse design, where we optimize a designed geometry to achieve targeted properties and the geometry is parameterized by a density function. This optimization is challenging because it has a very high dimensionality and is usually constrained by partial differential equations (PDEs) and additional inequalities. Here, we propose a new deep learning method – physics-informed neural networks with hard constraints (hPINNs) – for solving topology optimization. hPINN leverages the recent development of PINNs for solving PDEs, and thus does not rely on any numerical PDE solver. However, all the constraints in PINNs are soft constraints, and hence we impose hard constraints by using the penalty method and the augmented Lagrangian method. We demonstrate the effectiveness of hPINN for a holography problem in optics and a fluid problem of Stokes flow. We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique. Moreover, the implementation of inverse design with hPINN can be easier than that of conventional methods.

If you like the kind of articles I publish and follow my weekly machine learning issues subscribe to my newsletter here

If you want to listen to this article as audio while you read, you can do it here.

If you want your blog articles as audio, available at the top of your posts, you can follow my steps. Currently, Hashnode has no support for audio embedding, but we'll make it work. If you host your blog on WordPress or Ghost, you can embed an HTML5 audio player directly into a post. Let's first generate all the audio.

Last week I came across a platform called Send As A Podcast. It creates an audio version of an article and stores it in an Amazon S3 bucket. Once you've set up your audiblogs account, follow these steps to add the audio to your blog post.

- First, create a public link for the draft of the blog. Here's how to do this.
- Open the link and send it as a podcast via the audiblogs Chrome extension.
- Remember, when you set up the audiblogs platform in Chrome you got a link that looks like `https://rebrand.ly/......`
- Open a Chrome tab and go to that website. You now have access to all the articles you've saved with this platform.
- Look for the `enclosure URL` tag in the XML file.
- Copy that `mp3` link, and with it create an HTML5 widget like this: `<audio controls> <source src="https://s3.us-west-2.amazonaws.com/audiblogs/613be84b-99a8-48e1-864d-0e75fbd74eda.mp3" type="audio/mpeg"> </audio>`

For Hashnode users, until Hashnode supports audio embedding in articles you have to create a new custom widget each time and embed the HTML in it, or you can just add a link to the audio file at the top of the post. Now enjoy.

A natural-sounding audio experience can help your blog readers engage more with a post. If you listen to a post while reading it, your mind is less likely to be distracted by the world around you, and you'll engage more with the post.

Facebook recently launched a machine learning system for users who are visually impaired. This service describes photos in a robotic voice and can recognize over 1200 visual concepts.

This system is called automatic alternative text. The system can recognize and explain what’s happening in a picture, including the relative size and position of people and objects, in any of 45 languages.

In 2016 Facebook launched a similar service that initially learned to describe photos from 100+ common concepts, trained with hand-labeled data like trees and mountains. The following year, face recognition was added as well. The new upgrade extends the previous versions in the following ways:

- FB used a weakly supervised approach to train ResNeXt image recognition models on 3.5 billion images and 17000 hashtags that users put with them. Using a similar architecture they applied transfer learning to train linear classification heads to recognize concepts including selfies, national monuments, and foods like rice and French fries.
- They used an existing object detection library to build a Fast R-CNN that recognizes the number, size, and position of various items in an image and determines its primary subject.
- The system starts each description with, “Maybe…,” and it doesn’t describe concepts that it can’t identify reliably. Users can request extra details, and the model will display a page that itemizes a picture’s elements by their position (top, middle, left, or bottom), relative size (primary, secondary, or minor), and category (people, activities, animals, and so on).

The World Health Organization estimates there are around 285 million people who are visually impaired, of whom around 39 million are blind. People who don't see well are as reliant on information as anyone, and they represent a sizable market. It should be a crime for large internet companies not to make their services accessible to everyone. Online accessibility should be recognized as a right, not a privilege.

I manage the entire blog by myself. I intend to post every Monday morning, but if I'm busy with university or something then I'll delay an issue by a week.

Welcome to the first issue of machine learning weekly. I worked tirelessly throughout the week to find you the best of the best research in machine learning and the field of AI.

Official Article here

When Facebook users scroll through their News Feed, they find all kinds of content — articles, friends’ comments, event invitations, and of course, photos. Most people are able to instantly see what’s in these images, whether it’s their new grandchild, a boat on a river, or a grainy picture of a band on stage. But many users who are blind or visually impaired (BVI) can also experience that imagery, provided it’s tagged properly with alternative text (or “alt text”). A screen reader can describe the contents of these images using a synthetic voice and enable people who are BVI to understand images in their Facebook feed. If you are into machine learning, it's possible that you'll like my in-depth articles and weekly deep learning updates too.

Official article here

We’ve all had a hard drive fail on us, and often it’s as sudden as booting your machine and realizing you can’t access a bunch of your files. It’s not a fun experience. It’s especially not fun when you have an entire data center full of drives that are all important to keeping your business running. What if we could predict when one of those drives would fail, and get ahead of it by preemptively replacing the hardware before the data is lost? This is where the history of predictive drive failure at Datto begins.

Find the Unsplash dataset here. Train and test models using the largest collaborative image dataset ever openly shared. The Unsplash Dataset is created by over 200,000 contributing photographers and billions of searches across thousands of applications, uses, and contexts. There are two dataset versions: the Lite version has 25,000+ images (~550 MB) and the Full version has 2,000,000+ images (~25 GB, non-commercial usage only).

Find the pdf here

In this paper, a novel learning-based network, named DeepDT, is proposed to reconstruct the surface from Delaunay triangulation of the point cloud. DeepDT learns to predict inside/outside labels of Delaunay tetrahedrons directly from a point cloud and corresponding Delaunay triangulation. The local geometry features are first extracted from the input point cloud and aggregated into a graph deriving from the Delaunay triangulation. Then a graph filtering is applied to the aggregated features in order to add structural regularization to the label prediction of tetrahedrons. Due to the complicated spatial relations between tetrahedrons and the triangles, it is impossible to directly generate ground truth labels of tetrahedrons from the ground truth surface. Here the researchers propose a multilabel supervision strategy that votes for the label of a tetrahedron with labels of sampling locations inside it. The proposed DeepDT can maintain abundant geometry details without generating overly complex surfaces, especially for inner surfaces of open scenes. Meanwhile, the generalization ability and time consumption of the proposed method are acceptable and competitive compared with state-of-the-art methods. Experiments demonstrate the superior performance of the proposed DeepDT.

One of my favorite data structures is the binary heap. In this article I'll show you what a heap is and how to make one with Python, and at the end I'll show you a sorting technique that we get for free just by building a heap.

A priority queue is a queue where the most important element is always at the front. The queue can be a max-priority queue (largest element first) or a min-priority queue (smallest element first).

So as a data structure designer you have the following options to design a priority queue:

- A max-sorted or min-sorted array, but the downside is that inserting new items is slow because they must be inserted in sorted order.
- Or a binary heap (max heap or min heap).

Now the question arises: what are heaps? The heap is a natural data structure for a priority queue; in fact, the two terms are often used as synonyms. A heap is more efficient than a sorted array because a heap only has to be partially sorted. All heap operations run in \(O(\log n)\) or \(O(n)\) time.

Examples of algorithms that can benefit from a priority queue implemented as a heap:

- Dijkstra's algorithm for graph searching uses a priority queue to calculate the minimum cost.
- A* pathfinding for artificial intelligence.
- Huffman coding for data compression. This algorithm builds up a compression tree. It repeatedly needs to find the two nodes with the smallest frequencies that do not have a parent node yet.
- Heap sort.
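
All of the algorithms above need fast access to the smallest (or largest) pending item. Python's standard library already ships a binary min heap in the `heapq` module, so a min-priority queue can be sketched like this:

```python
import heapq

# A min-priority queue backed by a binary heap: the smallest
# (priority, item) pair is always at index 0.
pq = []
heapq.heappush(pq, (3, "write tests"))
heapq.heappush(pq, (1, "fix the bug"))
heapq.heappush(pq, (2, "review PR"))

# Items come out in priority order; each push/pop costs O(log n).
order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
print(order)  # ['fix the bug', 'review PR', 'write tests']
```

Dijkstra, A*, and Huffman coding all follow this pattern: push candidates with a priority, always pop the cheapest one next.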

First we need to decide what our heap should do, design-wise. It should have APIs to:

- Build a binary heap from an unsorted pile of numbers.
- Add a new number while maintaining the heap property with few swaps.
- Find the minimum or maximum in the heap.
- Remove that minimum or maximum from the heap and rearrange the heap to maintain its heap property.
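
To make that API concrete, here is a minimal, hypothetical array-backed max-heap sketch covering the four operations above (an illustrative design, not AKDSFramework's implementation):

```python
class MaxHeap:
    """A minimal array-backed binary max heap (illustrative sketch)."""

    def __init__(self, data=None):
        self.a = list(data or [])

    def build(self):
        # Heapify bottom-up starting from the last parent: O(n) overall.
        for i in range(len(self.a) // 2 - 1, -1, -1):
            self._sift_down(i)

    def add(self, value):
        # Append and bubble up: O(log n), few swaps.
        self.a.append(value)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] < self.a[i]:
            parent = (i - 1) // 2
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def peek_max(self):
        # The maximum is always at the root: O(1).
        return self.a[0]

    def pop_max(self):
        # Move the last element to the root, shrink, restore: O(log n).
        top = self.a[0]
        self.a[0] = self.a[-1]
        self.a.pop()
        if self.a:
            self._sift_down(0)
        return top

    def _sift_down(self, i):
        n = len(self.a)
        while True:
            largest, left, right = i, 2 * i + 1, 2 * i + 2
            if left < n and self.a[left] > self.a[largest]:
                largest = left
            if right < n and self.a[right] > self.a[largest]:
                largest = right
            if largest == i:
                return
            self.a[i], self.a[largest] = self.a[largest], self.a[i]
            i = largest
```

A min heap is the same design with every comparison flipped.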

With the design of the heap software out of the way, let's get to coding. I've built AKDSFramework, which is a great resource for data structure and algorithm designs; I'll use my framework to show you how to build a heap.

Let's first import heap class from AKDSFramework

```
from AKDSFramework.structure import MaxHeap, MinHeap
```

Now let’s build a max heap with 15 values.

```
mxheap = MaxHeap([data**2 for data in range(15)])
```

Now it’s important to call the build method on the heap to construct the heap from the unsorted array of numbers. If the build is not done, printing and doing operations on the heap will not be valid and will raise `HeapNotBuildError`. So always rebuild your heap with the `.build()` method if you cause any change in the heap structure. Each call to `.build()` takes \(O(\log n)\) time if there is a single heap violation; otherwise it's a linear operation for `n` unordered elements.

```
mxheap.build()
# Now add few elements to the heap
mxheap.add(12)
mxheap.add(4)
# The heap structure has changed, so we have to call .build() again
mxheap.build()
```

Now let's see the heap in a beautiful structure which is easy to understand.

```
mxheap.prettyprint()
```

Now here is how the heap looks:

Similarly you can implement the min heap by yourself.

As you can see, for a max heap, after each build you get the maximum element at the head of the heap in constant \(O(1)\) time. And if you rebuild the heap each time (`n` times) after removing the max item, you end up with a sorted array in \(O(n \log n)\) time.
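
Before reaching for a library, the same idea (repeatedly pull the root, then restore the heap) can be sketched with the standard library's `heapq` min heap:

```python
import heapq

def heapsort(array):
    # Build a min heap in O(n), then pop the smallest element
    # n times at O(log n) each: O(n log n) overall.
    heap = list(array)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([23, 93, 37, 78, 55]))  # [23, 37, 55, 78, 93]
```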

Let's implement that with the help of min heaps:

I've already implemented heap sort with min heap in AKDSFramework

```
from AKDSFramework.applications.sorting import heapsort
import random
array = [random.randint(1, 100) for _ in range(10)]
print(heapsort(array, visualize=False))
```

This returns the sorted array, like this: `[23, 32, 37, 51, 55, 57, 59, 63, 78, 93]`.

Try to implement this by yourself; if you get stuck, here is the source code for implementing heap sort with the built-in min heap API in AKDSFramework.

```
from AKDSFramework.structure import MinHeap

def heapsort(array, visualize=False):
    r"""
    Heapsort implementation with the min heap from AKDSFramework. Running time: :math:`O(n \log n)`

    Args:
        - ``array`` (list): List of elements
        - ``visualize`` (bool): False by default. Set to True to visualize each iteration.
    """
    if visualize:
        iteration = 0

    ret = []
    mnheap = MinHeap(array)
    mnheap.build()

    while len(mnheap) >= 1:
        if len(mnheap) == 1:
            ret.append(mnheap[0])
            break
        # O(1) time access of the minimum element
        root = mnheap.get_root()
        ret.append(root)
        # O(log n) operation, deleting at the beginning
        mnheap.delete_root()
        # The deletion changed the heap structure, so we call
        # .build() again to restore the heap property
        mnheap.build()  # O(log n) for a single violation

        if visualize:
            print("-" * 40)
            print(f'End of Iteration: {iteration}')
            print(f'Currently heap: {mnheap}')
            print(f'Our returning array: {ret}')
            iteration += 1

    return ret
```

If you want to implement heaps all by yourself I'd recommend you to check out the following resources:

- Heaps on Wikipedia
- MIT Lecture on heaps
- Source code of the AKDSFramework's Min and Max Heap implementations here. Implementations are based on MIT lecture video.

If you find this helpful please subscribe to my newsletter. Please feel free to reach out to me on twitter.

As you already know, calculating big O is a big part of what we do to approximate the worst-case running time of an algorithm. But most of it is done by hand, tracing step by step manually. I'm going to propose an analytical approach to compute big O with respect to one expanding variable. Let's see what I mean with some examples:

- Sorting a sequence of numbers is \(O(n \log n)\) with respect to the sequence's length \(n\).
- Finding the maximum element of a given array is \(O(n)\) with respect to the length of the array.
- Finding the last element of a singly linked list is \(O(n)\) with respect to the length of the list.

So in the above cases I'm calling the sequence of numbers the "expanding variable", because we are calculating how the algorithm performs as this "expanding variable" grows toward big sizes.

Now let's create a Big O analyzer.

Our big O analyzer system has the following parts:

- A function that makes a dictionary recording how much time the function takes for different sizes of inputs.
- Another function that generates different sizes of inputs to feed into the function. Let's call it a generator.
- Another function that interprets the execution times and fits them to a definitive time complexity with respect to the expanding variable.

Let's see an example to clarify this thing:

Let's say we are tasked with finding the complexity of the Python function `sorted()`. First we identify our expanding variable: `sorted()` sorts a sequence of data, so the expanding variable is the length of that sequence. Now let's import a generator that is built into AKDSFramework:

```
from AKDSFramework.applications.complexity_analysis.data_generators import integer_sequence
```

Now, armed with `integer_sequence`, which can generate a sequence of integers of any length, we create an instance like this:

```
int_generator = lambda n: integer_sequence(lb=1, ub=200000, size=n)
```

Now this `int_generator` can create a random sequence of length `n` whose individual elements range from 1 to 200000.
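
If you don't have AKDSFramework handy, a stand-in generator with the same shape is easy to write (the keyword names `lb`, `ub`, and `size` mirror the call above; this is a hypothetical reimplementation, not the framework's code):

```python
import random

def integer_sequence(lb, ub, size):
    # A list of `size` random integers, each in the range [lb, ub].
    return [random.randint(lb, ub) for _ in range(size)]

int_generator = lambda n: integer_sequence(lb=1, ub=200000, size=n)

sample = int_generator(5)
print(len(sample))  # 5
```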

Now our job is to make a dictionary recording how much time the function takes for different sizes of inputs. From the previous piece of code we already know we can generate inputs of different sizes. Now it's time to create the dictionary. For that we need the following:

```
from AKDSFramework.applications.complexity_analysis.analyser import runtimedict
```

Now we feed the function and any other keyword arguments for the function into the `runtimedict` method.

The `runtimedict` method takes a few arguments:

- `func`: The function for which you want to build the execution time dictionary.
- `pumping_lower_bound`: The smallest size of the expanding variable, i.e. where the measurements start. For example, if you're timing arrays of various sizes, this is the smallest array size.
- `pumping_upper_bound`: The largest size of the expanding variable, i.e. where the measurements stop.
- `total_measurements`: How many measurements to take between the smallest and the largest size.
- `pumping`: Which keyword argument is the expanding variable, i.e. which one needs to be pumped. Put the name of the variable as a string.
- `**kwargs`: All the arguments of the function.

Let's take the example of `sorted`. The `sorted` function takes `iterable` as the keyword argument for the sequence of numbers, so to build the run time dictionary we write this:

- We'll record 200 measurements.
- Our array size will start at 1000.
- Our array size will end at 5000.

So the code would be

```
# The integer generator from before
int_generator = lambda n: integer_sequence(lb=1, ub=200000, size=n)
# And the Run time dictionary
rtdc = runtimedict(sorted, 1000, 5000, 200, pumping='iterable', iterable=int_generator)
```
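
If you just want the idea without installing anything, a simplified stand-in that produces a similar size-to-time dictionary could look like this (the names `runtime_dict` and `make_input` are hypothetical, not AKDSFramework's API):

```python
import time

def runtime_dict(func, lower, upper, measurements, make_input):
    """Map input size -> execution time of func on a generated input
    (an illustrative stand-in, not AKDSFramework's runtimedict)."""
    times = {}
    step = max((upper - lower) // measurements, 1)
    for n in range(lower, upper + 1, step):
        data = make_input(n)
        start = time.perf_counter()
        func(data)
        times[n] = time.perf_counter() - start
    return times

# Time sorted() on reversed lists of growing size.
rtdc = runtime_dict(sorted, 1000, 5000, 20, lambda n: list(range(n, 0, -1)))
print(len(rtdc))  # 21
```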

Now to fit the complexity we need the following lines of code:

```
from AKDSFramework.applications.complexity_analysis.analyser import run_inference_on_complexity
run_inference_on_complexity(rtdc)
```

```
Calculating complexity: 100%|██████████| 7/7 [00:00<00:00, 2618.63it/s]
O(N log N)
```

As you can see, our analysis finds that `sorted` is \(O(n \log n)\), which is correct.

Now let's take another example of bubble sort and insertion sort. Both are order \(O(n^2)\)

```
from AKDSFramework.applications.sorting import bubblesort, insertionsort
rtdc = runtimedict(insertionsort, 10, 2000, 200, pumping='array', array=int_generator, visualize=False, maintain_iter_dict=False)
run_inference_on_complexity(rtdc)
```

```
Processing: 100%|██████████| 200/200 [00:12<00:00, 16.17it/s]
Calculating complexity: 100%|██████████| 7/7 [00:00<00:00, 2285.37it/s]
O(N^2)
```

```
rtdc = runtimedict(bubblesort, 1000, 5000, 200, pumping='array', array=int_generator, visualize=False, maintain_iter_dict=False)
run_inference_on_complexity(rtdc)
```

```
Processing: 100%|██████████| 200/200 [03:23<00:00, 1.02s/it]
Calculating complexity: 100%|██████████| 7/7 [00:00<00:00, 3031.82it/s]
O(N^2)
```

So we can say the big O complexity analysis is working. Let's see the inner workings of this module:

- First we calculate the runtime for different sizes of arrays.
- Next we fit size against time and return the least-squares solution to a linear matrix equation. By fitting we mean: in separate instances we transform the sizes to \(O(n)\), \(O(n^2)\), \(O(n^3)\), \(O(n \log n)\), \(O(\log n)\), or \(O(c^n)\), then we look at which transformation is best fitted to a straight line against the times. The best-fitted one is our big O, because the time is of the same order.
- We return the best-fitted complexity.

To see which one fits better we use `numpy.linalg.lstsq` to return the least-squares solution to a linear matrix equation. A minimal returned residual means the transformed data fits a linear equation better. More about this method here.
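
As a rough illustration of that fitting step, here is a simplified stand-in (not the module's actual code): it scores each candidate transform by the correlation between the transformed sizes and the measured times, instead of inspecting `lstsq` residuals, and omits the exponential candidate:

```python
import math

def fit_complexity(sizes, times):
    """Return the candidate transform whose f(n) is most linearly
    correlated with the measured times (a simplified sketch)."""
    candidates = {
        "O(log N)": lambda n: math.log(n),
        "O(N)": lambda n: n,
        "O(N log N)": lambda n: n * math.log(n),
        "O(N^2)": lambda n: n ** 2,
        "O(N^3)": lambda n: n ** 3,
    }

    def correlation(xs, ys):
        # Pearson correlation coefficient, computed by hand.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    return max(candidates,
               key=lambda name: correlation(
                   [candidates[name](n) for n in sizes], times))

# Synthetic quadratic timings should be identified as O(N^2).
sizes = list(range(100, 1100, 100))
times = [n ** 2 * 1e-9 for n in sizes]
print(fit_complexity(sizes, times))  # O(N^2)
```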

- First, download/clone the repo: `git clone https://github.com/theroyakash/AKDSFramework.git`
- Now uninstall any previously installed version: `pip3 uninstall AKDSFramework`
- Now install fresh on your machine: `pip3 install -e AKDSFramework`

Alternatively, this is easier to install but a bit slower at installation time: `pip3 install https://github.com/theroyakash/AKDPRFramework/tarball/main`

Now, to check whether your installation completed without error, import AKDSFramework:

```
import AKDSFramework
print('AKDSFramework Version is --> ' + AKDSFramework.__version__)
```

What you contribute is the only resource behind this material. Please support me on Gumroad.

Now there is no way you can optimize the function itself; what you can do instead is store results from a previous computation and reuse them in a new computation to find solutions to other problems.

- We'll implement the n-th Fibonacci number problem.
- We'll find out how much time it takes to compute the 40th Fibonacci number.
- And at the end we'll make our code 400k times faster (yes, you read that right).

Let's create the world's worst Fibonacci computation code. The algorithm might look like this:

```
FIBONACCI(n):
    if n == 0: f = 0
    elif n == 1: f = 1
    else:
        f = FIBONACCI(n - 1) + FIBONACCI(n - 2)
    return f
```

This is a correct algorithm for Fibonacci. But if you look at the recurrence relation `T(n) = T(n-1) + T(n-2) + O(1)`, you can see that the code runs in exponential time `O(2^n)`, which is really, really bad.

The equivalent python code would be:

```
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
```

If you draw a recursion tree you can see that you are doing the same computation over and over again in different subtrees. Let's see what I mean:

```
+--+-----------+-----------+--------+-----------+-----------+--+
| | | | Fib(n) | | | |
+--+-----------+-----------+--------+-----------+-----------+--+
| | | Fib (n-1) | | Fib (n-2) | | |
+--+-----------+-----------+--------+-----------+-----------+--+
| | Fib (n-2) | Fib (n-3) | | Fib (n-3) | Fib (n-4) | |
+--+-----------+-----------+--------+-----------+-----------+--+
```

See, for calculating `Fib(n)` you are calculating `Fib(n-1)` and `Fib(n-2)`. In a separate computation you are computing `Fib(n-2)`, and for that you are computing `Fib(n-3)` and `Fib(n-4)`.

If you had `Fib(n-2)` from the previous computation stored, you wouldn't have to recompute it and its subsequent branches. So you would save a lot of time just by not recomputing anything.

Let's see, without caching, how much time it takes to compute `fib(40)`, the 40th Fibonacci number:

```
import time

def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)

start = time.perf_counter()
print(fibonacci(40))
end = time.perf_counter()
print(f"Computed in {end - start} seconds")
```

The total computation time is 40.635853995000005 seconds. So our Python program takes about 40 seconds to compute fib(40). Now let's store each intermediate step's result in a dictionary so that we can retrieve it later in constant time.

AKDSFramework has a built-in decorator for caching purposes. You can find AKDSFramework here. You can pretty much use it on any Python function you like: small, big, with other dependencies, anything.

If you install it, you get benchmarking of Python programs, caching of Python functions, and implementations of several data structures and algorithms using best practices.

If you don't wish to use my package, at the end of the blog I'll paste the source code for the `@cached` decorator.
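
As an aside, Python's standard library offers the same kind of memoization via `functools.lru_cache`, which you can use if you'd rather not add any dependency at all:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, like a plain dictionary cache
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(40))  # 102334155
```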

Now let's import the cached decorator from AKDSFramework

```
from AKDSFramework.applications.decorators import cached
```

Now, with the cached decorator, let's implement the Fibonacci code again and see how much time it takes to find fib(40):

```
import time

@cached
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)

start = time.perf_counter()
print(fibonacci(40))
end = time.perf_counter()
print(f"Computed in {end - start} seconds")
```

Now it takes around 8.945500000000217e-05 seconds, which is roughly 400k times faster.

If you don't wish to use AKDSFramework, here is the source code for the caching decorator.

Our cache stores an unlimited amount of data, but if your program has limited storage you can update the dictionary to hold a predefined amount of data and evict entries that haven't been used for long enough, following a predefined cache replacement policy.

```
def cached(func):
    cache = dict()

    def caching(*args):
        if args in cache:
            return cache[args]
        result = func(*args)
        cache[args] = result
        return result

    return caching
```
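
Building on that decorator, the bounded cache with eviction mentioned above can be sketched with the standard library's `OrderedDict`, implementing a least-recently-used (LRU) replacement policy (an illustrative variant, not part of AKDSFramework):

```python
from collections import OrderedDict

def lru_cached(maxsize):
    """Decorator: memoize up to `maxsize` results, evicting the
    least recently used entry when the cache is full."""
    def decorator(func):
        cache = OrderedDict()

        def caching(*args):
            if args in cache:
                cache.move_to_end(args)  # mark as recently used
                return cache[args]
            result = func(*args)
            cache[args] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)  # evict least recently used
            return result

        return caching
    return decorator

@lru_cached(maxsize=64)
def fibonacci(n):
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(40))  # 102334155
```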

There are a couple of ways of doing this: going through it manually, or using some kind of library like cProfile to generate a report on the function's workings.

I've written all the necessary code to run this in my package `AKDSFramework`, which can be found here. You can pretty much use it on any Python function you like: small, big, with other dependencies, anything.

If you install it, you get benchmarking plus implementations of several data structures and algorithms using best practices.

If you don't wish to use my package, at the end of the blog I'll paste the source code for the `@benchmark` decorator.

We're going to see an example of benchmarking by building a max heap, adding 2 numbers to it, and building it again.

To make max heaps I'll use AKDSFramework, let's create a heap and build it now with around 600 elements.

```
from AKDSFramework.applications.decorators import benchmark
from AKDSFramework.structure import MaxHeap

@benchmark
def buildHeap(array):
    h = MaxHeap(array)
    h.build()
    h.add(68)
    h.add(13)
    h.build()

buildHeap([data**2 for data in range(601)])
```

Notice the `@benchmark` decorator at the beginning of the function declaration; it calls cProfile to start measuring what takes how long.

Now running the code will output a report in the console like this:

```
3597 function calls (3003 primitive calls) in 0.002 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.002 0.002 <ipython-input-18-1d08ee399432>:4(buildHeap)
2 0.000 0.000 0.002 0.001 /opt/venv/lib/python3.7/site-packages/AKDSFramework/structure/heap.py:136(build)
1195/601 0.001 0.000 0.001 0.000 /opt/venv/lib/python3.7/site-packages/AKDSFramework/structure/heap.py:153(heapify)
1195 0.000 0.000 0.000 0.000 /opt/venv/lib/python3.7/site-packages/AKDSFramework/structure/heap.py:67(get_left_child)
1195 0.000 0.000 0.000 0.000 /opt/venv/lib/python3.7/site-packages/AKDSFramework/structure/heap.py:53(get_right_child)
2 0.000 0.000 0.000 0.000 /opt/venv/lib/python3.7/site-packages/AKDSFramework/structure/heap.py:26(add)
1 0.000 0.000 0.000 0.000 /opt/venv/lib/python3.7/site-packages/AKDSFramework/structure/heap.py:128(__init__)
1 0.000 0.000 0.000 0.000 /opt/venv/lib/python3.7/site-packages/AKDSFramework/structure/heap.py:21(__init__)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
2 0.000 0.000 0.000 0.000 {built-in method builtins.len}
```

This report lists every call and how much time it takes. Look at the second-to-last entry, `{method 'append' of 'list' objects}`: it is called 2 times in total, since we append 2 elements.

This way you can see how much time each function takes and how many times it is called. If a part is slow, you can reduce the number of calls or switch to a different approach for that part.

All of AKDSFramework's implementations of data structures and algorithms are well optimized, so you are unlikely to find a bottleneck when running `@benchmark` on its function calls.

If you don't wish to install AKDSFramework, here is the `@benchmark` decorator source code:

```
import cProfile
import pstats
import io

def benchmark(func):
    """
    AKDSFramework default benchmark profiler. Implemented with cProfile and pstats.
    """
    def profiler(*args, **kwargs):
        profile = cProfile.Profile()
        profile.enable()
        returnvalue = func(*args, **kwargs)
        profile.disable()
        stringIO = io.StringIO()
        ps = pstats.Stats(profile, stream=stringIO).sort_stats("cumulative")
        ps.print_stats()
        print(stringIO.getvalue())
        return returnvalue
    return profiler
```

Let's imagine the following task

```
for i in range(10):
    print(i)
```

This loop prints the number `i` 10 times. If you instead loop over 100 elements, it runs 100 times. So the running time grows linearly with the input size.

So we can write the following table

- 1 unit of time to complete if you have 1 element.
- 2 unit of time to complete if you have 2 elements.
- 3 unit of time to complete if you have 3 elements.
- 4 unit of time to complete if you have 4 elements.
- 5 unit of time to complete if you have 5 elements.

.....................

- \(n\) unit of time to complete if you have \(n\) elements.

Now let's imagine the following task

```
for i in range(10):
    for y in range(10):
        print(i)
```

This program has order-\(n^2\) running time because the running time grows with the square of the input size.

So we can write the following table

- 1 unit of time to complete if you have 1 element.
- 4 unit of time to complete if you have 2 elements.
- 9 unit of time to complete if you have 3 elements.
- 16 unit of time to complete if you have 4 elements.
- 25 unit of time to complete if you have 5 elements.

.....................

- \(n^2\) unit of time to complete if you have \(n\) elements.

Now similar to these two imagine a function that does the following

- 1 unit of time to complete if you have 2 elements.
- 2 unit of time to complete if you have 4 elements.
- 3 unit of time to complete if you have 8 elements.
- 4 unit of time to complete if you have 16 elements.
- 5 unit of time to complete if you have 32 elements.

.....................

- \(n\) unit of time to complete if you have \(2^n\) elements.

If you analyze this pattern you'll see that doubling the input size adds only one unit of time: the running time grows logarithmically with the input size.

In asymptotic analysis we just call this \(\log(n)\), which can be in any base, since changing the base only changes a constant factor. But since we computer scientists keep halving things (binary trees, binary search), we end up with \(\log_2(n)\) most of the time, which we just write as \(\log(n)\).
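Binary search is the classic example of a \(\log n\) algorithm: each comparison halves the remaining search space. Here is a minimal sketch (my own illustration, not from the original post):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.
    Each step halves the search space, so it runs in O(log n) time."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1   # target is in the right half
        else:
            high = mid - 1  # target is in the left half
    return -1

print(binary_search(list(range(0, 100, 2)), 42))  # -> 21
```

With 1,000,000 elements this needs at most about 20 comparisons, which is exactly the \(\log_2(n)\) behavior described above.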

We talked about n log n time at the beginning of the article, which is a linearithmic-time problem. You can construct an n log n problem like this:

```
a loop that runs n times{
-> a program that runs in log n time complexity
}
```

A loop is running n times and a log n algorithm is running in that loop so the overall algorithm is = \(O(n \log n)\).
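As a concrete sketch of that pattern (an illustration I'm adding, not from the original post), here's a loop over n targets where each iteration does an \(O(\log n)\) binary search using the standard-library `bisect` module:

```python
from bisect import bisect_left

def contains_all(sorted_list, targets):
    """For each of the n targets, run an O(log n) binary search:
    n iterations x log n work per iteration = O(n log n) overall."""
    found = []
    for t in targets:                      # the loop runs n times
        i = bisect_left(sorted_list, t)    # O(log n) lookup
        found.append(i < len(sorted_list) and sorted_list[i] == t)
    return found

print(contains_all([1, 3, 5, 7], [3, 4, 7]))  # -> [True, False, True]
```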

You can learn more about this here

I was building a Discord bot with a feature that sends the top news articles in a given hour, but the URLs were too long and looked bad in a Discord chat. So I thought of making a shortener service based on TinyURL to beautify those long URLs.

We only need two built-in modules, so nothing needs to be installed:

- `contextlib` for with-statement context utilities, and
- the `urllib` module for encoding and fetching URLs.

```
import contextlib
from urllib.parse import urlencode
from urllib.request import urlopen

def tinyURLOf(url):
    encoded_url = urlencode({'url': url})
    request_url = 'http://tinyurl.com/api-create.php?' + encoded_url
    with contextlib.closing(urlopen(request_url)) as response:
        return response.read().decode('utf-8')
```

Now call `tinyURLOf(url='YOUR_URL_HERE')` to get the shortened URL back.

To follow this post you need two things:

- `numpy`, and
- some free time of yours.

If you have written any deep learning code before you likely have used these activations:

- Softmax
- ReLU, LeakyReLU
- and the good-old Sigmoid activation.

In this post I'll implement all these activation functions with numpy and also the derivative of these for the back-propagation.

Let's get the easy one out of the way first. ReLU stands for rectified linear unit: it passes positive inputs through unchanged and maps negative inputs to zero.

```
import numpy as np

class ReLU:
    """Applies the rectified linear unit function element-wise."""
    def __call__(self, x):
        return np.where(x >= 0, x, 0)

    def gradient(self, x):
        """
        Computes the gradient of ReLU.
        Args:
            x: input tensor
        Returns:
            Gradient of x
        """
        return np.where(x >= 0, 1, 0)
```

Usage:

```
relu = ReLU()
z = np.array([0.1, -0.4, 0.7, 1])
print(relu(z))           # ---> array([0.1, 0. , 0.7, 1. ])
print(relu.gradient(z))  # ---> array([1, 0, 1, 1])
```

The sigmoid function is defined as follows.

The main reason we use the sigmoid function is that it maps values into the range (0, 1). Therefore it is especially useful for models where we have to predict a probability as the output: since probabilities exist only between 0 and 1, sigmoid is the right choice. The function is differentiable, meaning we can find the slope of the sigmoid curve at any point. The function is monotonic, but its derivative is not. The logistic sigmoid can cause a neural network to get stuck during training. The softmax function is a more generalized logistic activation function used for multi-class classification.

The element-wise `exp` can be done like the following, and if you work out the derivative you'll find that `d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))`.

So let's write up the activation function for sigmoid operation:

```
import numpy as np

class Sigmoid:
    """
    Applies the element-wise function
    Shape:
        - Input: :math:`(N, *)` where `*` means any number of additional dimensions
        - Output: :math:`(N, *)`, same shape as the input
    """
    def __call__(self, x):
        return 1 / (1 + np.exp(-x))

    def gradient(self, x):
        r"""Computes the gradient of sigmoid.
        .. math::
            \frac{\partial}{\partial x} \sigma(x) = \sigma(x) \left(1 - \sigma(x)\right)
        Args:
            x: input tensor
        Returns:
            Gradient of x
        """
        return self.__call__(x) * (1 - self.__call__(x))
```

and the usage would be like this:

```
import numpy as np
z = np.array([0.1, 0.4, 0.7, 1])
sigmoid = Sigmoid()
return_data = sigmoid(z)
print(return_data) # -> array([0.52497919, 0.59868766, 0.66818777, 0.73105858])
print(sigmoid.gradient(z)) # -> array([0.24937604, 0.24026075, 0.22171287, 0.19661193])
```
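As a sanity check that the analytic gradient matches the math, we can compare it against a central finite difference. This is an illustrative sketch of my own (the `Sigmoid` class is repeated so the snippet runs standalone):

```python
import numpy as np

class Sigmoid:  # same class as above, repeated so this snippet is self-contained
    def __call__(self, x):
        return 1 / (1 + np.exp(-x))
    def gradient(self, x):
        return self.__call__(x) * (1 - self.__call__(x))

sigmoid = Sigmoid()
z = np.array([0.1, 0.4, 0.7, 1.0])
eps = 1e-6
# Central difference: (f(x + eps) - f(x - eps)) / (2 * eps)
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(np.allclose(numeric, sigmoid.gradient(z)))  # -> True
```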

LeakyReLU is similar to ReLU; it is the "leaky" version of a rectified linear unit. Instead of outputting zero wherever the input is below 0, it multiplies negative inputs by a small predefined slope. You specify an alpha (e.g. 0.01 or 0.2), and it outputs alpha * x wherever x < 0.

The following image shows the difference between ReLU and LeakyReLU

```
import numpy as np

class LeakyReLU:
    """Applies the element-wise function:
    Args:
        - alpha: Negative slope value; controls the angle of the negative slope in the :math:`-x` direction. Default: ``0.2``
    """
    def __init__(self, alpha=0.2):
        self.alpha = alpha

    def __call__(self, x):
        return np.where(x >= 0, x, self.alpha * x)

    def gradient(self, x):
        """
        Computes the gradient of LeakyReLU.
        Args:
            x: input tensor
        Returns:
            Gradient of x
        """
        return np.where(x >= 0, 1, self.alpha)
```
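Usage mirrors the ReLU example above. This sketch of mine repeats the class so it runs standalone; the `alpha=0.1` value is just for illustration:

```python
import numpy as np

class LeakyReLU:  # same class as above, repeated so this snippet is self-contained
    def __init__(self, alpha=0.2):
        self.alpha = alpha
    def __call__(self, x):
        return np.where(x >= 0, x, self.alpha * x)
    def gradient(self, x):
        return np.where(x >= 0, 1, self.alpha)

leaky = LeakyReLU(alpha=0.1)
z = np.array([0.5, -2.0, 3.0])
print(leaky(z))           # -> [ 0.5 -0.2  3. ]   (negative input scaled by alpha)
print(leaky.gradient(z))  # -> [1.  0.1 1. ]      (gradient is alpha where x < 0)
```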

`tanH`, or hyperbolic tangent, activation: the sigmoid maps the output to (0, 1), but `tanH` maps it to (-1, 1). The advantage is that negative inputs are mapped strongly negative and zero inputs are mapped near zero on the tanh graph.

The function is differentiable, and it is monotonic while its derivative is not. The `tanH` function is mainly used for classification between two classes.

Let's implement this in code

```
import numpy as np

class TanH:
    def __call__(self, x):
        return 2 / (1 + np.exp(-2 * x)) - 1

    def gradient(self, x):
        return 1 - np.power(self.__call__(x), 2)
```
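As a quick sanity check (my addition, not from the original post), this formulation is mathematically identical to NumPy's built-in `np.tanh`, and the gradient at 0 is 1:

```python
import numpy as np

class TanH:  # same class as above, repeated so this snippet is self-contained
    def __call__(self, x):
        return 2 / (1 + np.exp(-2 * x)) - 1
    def gradient(self, x):
        return 1 - np.power(self.__call__(x), 2)

tanh = TanH()
z = np.array([-1.0, 0.0, 1.0])
print(np.allclose(tanh(z), np.tanh(z)))  # -> True: identical to the built-in
print(tanh.gradient(np.array([0.0])))    # -> [1.]: slope is steepest at x = 0
```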

Softmax is used when performing multi-class classification; it turns a vector of scores into a probability distribution. Here is the code:

```
import numpy as np

class Softmax:
    def __call__(self, x):
        # Subtract the max for numerical stability before exponentiating
        e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
        return e_x / np.sum(e_x, axis=-1, keepdims=True)

    def gradient(self, x):
        # Note: this returns only the diagonal of the softmax Jacobian
        p = self.__call__(x)
        return p * (1 - p)
```
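A quick usage sketch (my addition): softmax rows always sum to 1, and equal scores map to a uniform distribution:

```python
import numpy as np

class Softmax:  # same class as above, repeated so this snippet is self-contained
    def __call__(self, x):
        e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
        return e_x / np.sum(e_x, axis=-1, keepdims=True)

softmax = Softmax()
logits = np.array([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
probs = softmax(logits)
print(np.allclose(probs.sum(axis=-1), 1.0))      # -> True: each row sums to 1
print(np.allclose(probs[1], [1/3, 1/3, 1/3]))    # -> True: equal scores, uniform output
```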

What is PyTorch's `.detach()` method? It is a method on the tensor class.

`tensor.detach()` creates a tensor that shares storage with the original tensor but does not require gradient. `tensor.clone()` creates a copy of the tensor that imitates the original tensor's `requires_grad` field.

You should use `detach()` when you want to remove a tensor from a computation graph, and `clone()` to copy a tensor while keeping the copy as part of the computation graph it came from.

Let's see that in an example here

```
import torch
import torchviz

X = torch.ones((28, 28), dtype=torch.float32, requires_grad=True)
y = X**2
z = X**2
result = (y + z).sum()
torchviz.make_dot(result).render('Attached', format='png')
```

And now one with the detach.

```
import torch
import torchviz

X = torch.ones((28, 28), dtype=torch.float32, requires_grad=True)
y = X**2
z = X.detach()**2
result = (y + z).sum()
torchviz.make_dot(result).render('Detached', format='png')
```

As you can see, the branch of computation through `X.detach()**2` is no longer tracked. This is reflected in the gradient of the result, which no longer records the contribution of that branch.

Welcome to the theroyakash publication, where you'll get the latest posts from theroyakash on whatever I'm working on. I also have a subreddit for announcements and a Discord server. Join and connect with me there. Cool research, projects, and other things are coming very soon.

Computer scientist theroyakash researches computer vision and artificial intelligence. This is the publication from theroyakash.

- Announcement subreddit: r/theroyakash
- Join our discord server here.