**The Object Oriented Programming paradigm** became popular in the ’60s and ’70s, in languages like Simula and Smalltalk. Object-oriented features were also added to existing languages like Ada, Fortran and Pascal.

Python is an object oriented programming language, though it doesn’t enforce strong encapsulation.

Introductory topics in object-oriented programming in Python, and more generally, include defining classes, creating objects, instance variables, the basics of inheritance, and maybe even some special methods like `__str__`…
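A minimal sketch of those introductory ideas in one place: a class with an instance variable, a subclass, and a custom `__str__`. The class and method names here are illustrative, not from the original article.

```python
class Animal:
    def __init__(self, name):
        self.name = name  # instance variable, set per object

    def __str__(self):
        # Special method: controls what print() and str() display
        return f"Animal({self.name})"


class Dog(Animal):  # Dog inherits from Animal
    def speak(self):
        return f"{self.name} says woof"


d = Dog("Rex")        # creating an object
print(d)              # uses the inherited __str__: Animal(Rex)
print(d.speak())      # Rex says woof
```

Note that `Dog` reuses `Animal`'s `__init__` and `__str__` without redefining them; that reuse is the basic payoff of inheritance.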

**The perceptron** is a classification algorithm. Specifically, it works as a linear binary classifier. It was invented in the late 1950s by Frank Rosenblatt.

The perceptron basically works as a threshold function — non-negative outputs are put into one class while negative ones are put into the other class.
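The threshold rule above can be sketched in a few lines. The weights and bias below are hand-picked illustrative values (they implement a logical AND), not weights learned by the perceptron training algorithm.

```python
import numpy as np

def predict(weights, bias, x):
    """Threshold rule: non-negative activation -> class 1, negative -> class 0."""
    activation = np.dot(weights, x) + bias
    return 1 if activation >= 0 else 0

w = np.array([1.0, 1.0])
b = -1.5

print(predict(w, b, np.array([1, 1])))  # 1  (1 + 1 - 1.5 = 0.5 >= 0)
print(predict(w, b, np.array([0, 1])))  # 0  (0 + 1 - 1.5 = -0.5 < 0)
```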

Though there’s a lot to talk about when it comes to neural networks and their variants, we’ll be discussing a specific problem that highlights the major differences between a single layer perceptron and one that has a few more layers.

- The Perceptron
- Structure and Properties
- Evaluation
- Training algorithm
- 2D XOR problem…

**Dynamic programming**, or DP, is an optimization technique. It is used in several fields, though this article focuses on its applications in algorithms and computer programming. It’s a topic that comes up often in algorithmic interviews.

Since DP isn’t very intuitive, most people (myself included!) often find it tricky to model a problem as a dynamic programming model. In this post, we’ll discuss when we use DP, followed by its types and then finally work through an example.

When is DP used?

- Overlapping Sub-problems
- Optimal Substructure

The Two kinds of DP

- The top-down approach
- The…
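A small illustration of the properties and approaches named above, using Fibonacci numbers as a stand-in example (the article's own worked example is truncated here): the naive recursion re-solves the same subproblems (overlapping sub-problems), and the answer for `n` is built from the answers for `n - 1` and `n - 2` (optimal substructure).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down DP: recurse as usual, but cache every subproblem result."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_bottom_up(n):
    """Bottom-up DP: solve subproblems in increasing order, no recursion."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(30), fib_bottom_up(30))  # 832040 832040
```

Without the cache, `fib(30)` would make over a million recursive calls; with it, each subproblem is solved exactly once.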

**Maximum Likelihood Estimation**, or MLE for short, is the process of estimating the parameters of a distribution so as to maximize the likelihood of the observed data under that distribution.

Simply put, when we perform MLE, we are trying to **find the distribution that best fits our data.** The resulting value of the distribution’s parameter is called the **maximum likelihood estimate.**

MLE is a very prominent frequentist technique. Many conventional machine learning algorithms work with the principles of MLE. For example, the best-fit line in linear regression calculated using least squares is identical to the result of MLE.
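A small numerical sketch of the idea, using an example of my own (not from the article): for a normal distribution with known variance, the log-likelihood of the data is maximized when the mean parameter equals the sample mean. The data values below are made up for illustration.

```python
import math

data = [2.1, 1.9, 2.4, 2.0, 1.6]

def log_likelihood(mu, xs, sigma=1.0):
    """Log-likelihood of xs under Normal(mu, sigma^2)."""
    n = len(xs)
    return (-n / 2 * math.log(2 * math.pi * sigma ** 2)
            - sum((x - mu) ** 2 for x in xs) / (2 * sigma ** 2))

# Scan candidate values of mu; the maximizer matches the sample mean.
candidates = [i / 100 for i in range(100, 301)]
best = max(candidates, key=lambda mu: log_likelihood(mu, data))
print(best, sum(data) / len(data))  # 2.0 2.0
```

The same "maximize the likelihood" recipe, applied to a linear model with Gaussian noise, reproduces the least-squares line mentioned above.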

Before we move…

**The Defective Chessboard problem**, also known as the **Tiling Problem**, is an interesting problem. It is typically solved with a “divide and conquer” approach. The algorithm has a time complexity of *O(n²)*.

Given an *n* by *n* board where *n* is of the form 2^k with k >= 1 (that is, *n* is a power of 2, with a minimum value of 2), and where one square is missing, fill the board using trominoes. A tromino is an L-shaped tile: a 2 × 2 block with one 1 × 1 cell missing.
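A compact sketch of the divide-and-conquer approach (function and variable names are my own): split the board into four quadrants, place one tromino at the center so it covers one cell in each of the three quadrants that do not contain the hole, and recurse — each quadrant is now a smaller board with exactly one "missing" cell. Each tromino gets a distinct integer id; the hole is marked -1.

```python
def tile_board(n, miss_r, miss_c):
    """Tile an n x n board (n a power of 2) with one missing cell at (miss_r, miss_c)."""
    board = [[0] * n for _ in range(n)]
    board[miss_r][miss_c] = -1
    counter = [0]  # mutable tromino-id counter shared by recursive calls

    def tile(top, left, size, mr, mc):
        if size == 1:
            return
        counter[0] += 1
        t = counter[0]
        half = size // 2
        quads = [(top, left), (top, left + half),
                 (top + half, left), (top + half, left + half)]
        # Each quadrant's cell adjacent to the center of the current board.
        centers = [(top + half - 1, left + half - 1),
                   (top + half - 1, left + half),
                   (top + half, left + half - 1),
                   (top + half, left + half)]
        for (qr, qc), (cr, cc) in zip(quads, centers):
            if qr <= mr < qr + half and qc <= mc < qc + half:
                # This quadrant already contains the hole; just recurse.
                tile(qr, qc, half, mr, mc)
            else:
                # Cover its center-adjacent cell with the central tromino,
                # then treat that cell as the quadrant's hole.
                board[cr][cc] = t
                tile(qr, qc, half, cr, cc)

    tile(0, 0, n, miss_r, miss_c)
    return board

for row in tile_board(4, 0, 0):
    print(row)
```

Every recursion level fills a constant fraction of the remaining cells, and each of the n² cells is written exactly once, which is where the *O(n²)* bound comes from.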

Solving the problem efficiently isn’t the goal of this post…

Over the last few days, I’ve received several messages and emails on the subject and after responding to some, I decided to compile everything into a single post.

The Google Summer of Code (GSoC, for short) is a program whose primary goal is to boost interest in open-source. The program targets college and university students and gives them the opportunity to contribute to open-source organizations of their choice over the summer. Potential candidates are required to write a proposal detailing the work they would be doing along with a timeline with specific deadlines for each sub-task. The coding period is…

GSoC’20 Mentor at CHAOSS | Former Intern at Samsung Research | CS Undergrad at BITS Pilani | polaris000.github.io