Friday, 24 January 2025

 🙌Introducing GitHub Kudos

One frustrating thing about being a developer is the job search process. One request I kept encountering was for recommendations from former colleagues.

Lately I have been quite active on GitHub: I created some small libraries and contributed to others. So if a recruiter checks my GitHub profile (most recruiters don't), they can see my repositories, but there's no straightforward way to showcase who you've worked with or who recommends working with you.


GitHub lacks a simple way to share personalized shoutouts, so I decided to build one: GitHub Kudos.

GitHub Kudos is an app that generates an image featuring a GitHub user's avatar together with their recommendation. My intention is that these images can be displayed on GitHub profiles and/or personal websites to show how other developers recommend you (or whom you recommend).

If you want to see how it looks, you can check out my profile: https://github.com/manuelarte#-people-i-recommend

Or, to see how the image looks by itself: https://github-kudos.com/manuelarte/kudos/octocat



I don’t think this will completely eliminate recruiters asking for recommendations, but I hope it makes it easier to share and showcase them. 😊

How Does It Work?


The app is quite simple. Let's say Alice (GitHub login alice) wants to recommend Bob (GitHub login bob). These are the steps:

  • Alice creates a repository called github-kudos (alice/github-kudos).
  • Alice creates a file bob.md in which she writes her recommendation to Bob.
  • Either Alice or Bob can access the kudos image at https://github-kudos.com/alice/kudos/bob
(I created a template repository in GitHub to facilitate this setup)
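As an illustration, a hypothetical alice/github-kudos/bob.md could be as simple as a short paragraph of plain Markdown (the content below is invented; the exact expected format is defined by the template repository):

```markdown
I worked with Bob for two years on our payments platform.
He writes clean, well-tested code and is always happy to help teammates.
I would gladly work with him again!
```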

Wednesday, 8 January 2025

Mancala and AI

🤖My First AI

I was always curious about how an AI for a game works. To be honest, it is quite a complex topic, and I thought it would be interesting to learn by building an AI for a small board game.

Instead of choosing the typical Tic-Tac-Toe, I decided to use Mancala. Mancala is a board game originally from Jordan whose rules are not very complicated, and I thought it could be a good starting point for my first AI.



The goal of this post is not to explain how Mancala works, but rather to focus on how I implemented the AI algorithm that plays the game. If you aren't familiar with the game, I recommend reading its rules and playing a game or two to get a feel for it.

Q-Learning

There are several algorithms for building a game AI, but the one I chose is Q-learning. In summary, Q-learning is a reinforcement learning algorithm that learns which action is best to take in each state. It helps an agent maximize the total reward over time through repeated interactions with the environment, even when no model of that environment is known.
In Mancala's case, the state is the number of seeds in every pit, and the algorithm stores each state's possible actions and their consequences.
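To make this concrete, here is a minimal sketch in Go of a Q-table and the standard Q-learning update rule. The type names and the state encoding are hypothetical, chosen for illustration; this is not the actual mancala-go code.

```go
package main

import "fmt"

// State is a compact encoding of the board (seed counts per pit and mancala).
type State string

// Action is the index of the pit the player chooses to sow from.
type Action int

type key struct {
	s State
	a Action
}

// QTable maps (state, action) pairs to their learned value.
type QTable map[key]float64

// Update applies the Q-learning rule:
// Q(s,a) ← Q(s,a) + α [ r + γ max_a' Q(s',a') − Q(s,a) ]
func (q QTable) Update(s State, a Action, reward float64, next State, nextActions []Action, alpha, gamma float64) {
	best := 0.0
	for i, na := range nextActions {
		if v := q[key{next, na}]; i == 0 || v > best {
			best = v
		}
	}
	q[key{s, a}] += alpha * (reward + gamma*best - q[key{s, a}])
}

func main() {
	q := QTable{}
	// Winning move: reward 1, no successor actions because the game is over.
	q.Update("S1", 5, 1, "terminal", nil, 0.5, 0.9)
	fmt.Println(q[key{"S1", 5}])
	// The move before it has no immediate reward, but value propagates back from S1.
	q.Update("S2", 4, 0, "S1", []Action{5}, 0.5, 0.9)
	fmt.Println(q[key{"S2", 4}])
}
```

Note how the second update gives (S2, A4) a positive value even though its immediate reward is 0: the discounted value of the winning state S1 flows backwards through the table, which is exactly what lets earlier moves "learn" from a win several turns later.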

How the algorithm works

Imagine the following situation (state) S1. The players' mancalas are on the sides: in this example, Player 2's mancala contains 5 seeds and Player 1's mancala contains 8. The 12 pits are arranged in two rows of 6 pits each:

 _____Player2_____________________________________________________
/  _____    ____    ____    ____    ____    ____    ____          \
/ |     |  [____]  [____]  [__1_]  [____]  [____]  [____]   ____  \
/ |  5  |                                                  |    | \
/ |_____|   ____    ____    ____    ____    ____    ____   |  8 | \
/          [____]  [____]  [____]  [____]  [____]  [__1_]  |____| \
/                                                                 \
/____________________________________________________*Player1_____\
Player 1 has only one option (action): A5. If they play that action, the game is finished and Player 1 wins.
The algorithm will then record that this final state and action (and all the previous states and actions that led to it) result in a win:
(S1, A5) = 1

Before this state, we had a previous one, e.g.: 

 _____Player2_____________________________________________________
/  _____    ____    ____    ____    ____    ____    ____          \
/ |     |  [____]  [____]  [____]  [__1_]  [____]  [____]   ____  \
/ |  5  |                                                  |    | \
/ |_____|   ____    ____    ____    ____    ____    ____   |  8 | \
/          [____]  [____]  [____]  [____]  [_1__]  [____]  |____| \
/                                                                 \
/____________________________________________________*Player1_____\

In this state S2, Player 1 has one action, A4. If they play that action, the game is still ongoing, so the algorithm would save something like this:
(S2, A4) = 0

So, essentially, the algorithm assigns a score to the different state-action pairs.

By playing many games (training), the algorithm's knowledge (its record of states and actions) grows, and it becomes able to make intelligent decisions.


Playing a game


If you want to try it out, don't hesitate to check out my repo: https://github.com/manuelarte/mancala-go
