Understand the Security of One Time Pad and how to Implement it in Python

What is a One Time Pad?

A One Time Pad is an information-theoretically secure encryption function. But what does that mean?

On a high level, the One Time Pad is a simple XOR function that takes the input and XORs it with a key-stream.

One Time Pad illustrated.

Encryption and decryption are identical.

  • Encryption: Takes the plaintext and the key-stream and XORs them to get the cipher text.
  • Decryption: Takes the cipher text and the (same) key-stream and XORs them to get the plaintext.

Some requirements of the One Time Pad are:

  • Key-stream should only be used once.
  • Key-stream should only be known by the sender and the receiver.
  • Key-stream should be generated by true randomness.

Hence, the requirements are all on the key-stream, which is the only secret input to the algorithm.

The beauty of the algorithm is the simplicity.

Understand the Security of the One Time Pad

The One Time Pad is information-theoretically secure.

That means that even an adversary with infinite computing power could not break it.

One Time Pad is unbreakable - it is information-theoretically secure.

The simplest way to understand why that is the case is the following: if an adversary catches an encrypted message of length, say, 10 characters, it can decrypt to any message of length 10.

The reason is that the key-stream can be anything and is as long as the message itself. That implies that the plaintext can be any possible message of 10 characters.

If the key-stream is unknown, then the cipher text can decrypt to any message.

Implementation in Python

Obviously, we have a dilemma: we cannot generate a key-stream with the required true randomness in Python.

The actual implementation of the One Time Pad is done by a simple xor.

def xor_bytes(key_stream, message):
    length = min(len(key_stream), len(message))
    return bytes([key_stream[i] ^ message[i] for i in range(length)])

Of course, this requires that the key_stream and message have the same length.

It also leaves out the problem of where the key_stream comes from; you cannot create a key_stream with the required properties in pure Python.
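That said, for experimenting you can approximate a key-stream with os.urandom, which is a cryptographically secure pseudo-random generator - good enough for a demonstration, but not the true randomness the One Time Pad demands. A minimal sketch:

import os


def xor_bytes(key_stream, message):
    length = min(len(key_stream), len(message))
    return bytes([key_stream[i] ^ message[i] for i in range(length)])


message = "DO ATTACK".encode()
# os.urandom is a CSPRNG - fine for a demo, but it is not true randomness
key_stream = os.urandom(len(message))

cipher = xor_bytes(key_stream, message)
plaintext = xor_bytes(key_stream, cipher)
print(plaintext.decode())  # prints: DO ATTACK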

Demonstrate the security in Python

If you were to receive a message encrypted by a One Time Pad, then for any guess of the plaintext there is a matching key-stream that produces it.

See the code below for a better understanding.

def xor_bytes(key_stream, message):
    length = min(len(key_stream), len(message))
    return bytes([key_stream[i] ^ message[i] for i in range(length)])


# cipher holds the received cipher text
# len(cipher) == 9, the same length as the guesses below

# If we guess that the plaintext is "DO ATTACK"
# Then the corresponding key_stream can be computed as follows
message = "DO ATTACK"
message = message.encode()
key_stream = xor_bytes(message, cipher)


# Similarly, if we guess the plaintext is "NO ATTACK"
# Then the corresponding key_stream can be computed as follows
message = "NO ATTACK"
message = message.encode()
guess_key = xor_bytes(message, cipher)

Conclusion

While One Time Pads are an ideal encryption system, they are not practical. The reason is that there is no efficient way to generate and distribute a truly random key-stream that is used only once and known to nobody but the sender and receiver.

Stream ciphers are often used as a compromise. An example of a stream cipher is A5/1.

A Simple 7 Step Guide to Implement a Prediction Model to Filter Tweets Based on Dataset Interactively Read from Twitter

What will we learn in this tutorial

  • How Machine Learning works and predicts.
  • What you need to install to implement your Prediction Model in Python
  • A simple way to implement a Prediction Model in Python with persistence
  • How to simplify the connection to the Twitter API using tweepy
  • Collect the training dataset from Twitter interactively in a Python program
  • Use the persistent model to predict the tweets you like

Step 1: Quick introduction to Machine Learning

Machine Learning: Input to Learner is Features X (data set) with Targets Y. The Learner outputs a Model, which can predict (Y) future inputs (X).
  • The Learner (or Machine Learning Algorithm) is the program that creates a machine learning model from the input data.
  • The Features X is the dataset used by the Learner to generate the Model.
  • The Target Y contains the categories for each data item in the Feature X dataset.
  • The Model takes new inputs X (similar to those in Features) and predicts a target Y, from the categories in Target Y.

We will implement a simple model that can classify tweets into two categories: allow and reject.

Step 2: Install sklearn library (skip if you already have it)

The Python code will be using the sklearn library.

You can install it by simply writing the following on the command line (also see here).

pip install scikit-learn

Alternatively, you might want to install it locally in your user space.

pip install scikit-learn --user

Step 3: Create a simple Prediction Model in Python to Train and Predict on tweets

The implementation encapsulates the machine learning model in a class. The class has the following methods.

  • create_dataset: Creates a dataset from a list of data items that represent allow and a list that represent reject. The dataset is divided into features and targets.
  • train_dataset: When your dataset is loaded, it should be trained to create the model, consisting of the predictor (transfer and estimator).
  • predict: Called after the model is trained. It predicts whether an input is in the allow category.
  • persist: Saves the model for later use, so that we do not need to collect data and train again. It should only be called after the dataset has been created and the model has been trained (after create_dataset and train_dataset).
  • load: Loads a saved model, ready to predict new input.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
import joblib


class PredictionModel:
    def __init__(self):
        self.predictor = {}
        self.dataset = {'features': [], 'targets': []}
        self.allow_id = 0
        self.reject_id = 1

    def create_dataset(self, allow_data, reject_data):
        features_y = allow_data + reject_data
        targets_x = [self.allow_id]*len(allow_data) + [self.reject_id]*len(reject_data)
        self.dataset = {'features': features_y, 'targets': targets_x}

    def train_dataset(self):
        x_train, x_test, y_train, y_test = train_test_split(self.dataset['features'], self.dataset['targets'])

        transfer = TfidfVectorizer()
        x_train = transfer.fit_transform(x_train)
        x_test = transfer.transform(x_test)

        estimator = MultinomialNB()
        estimator.fit(x_train, y_train)

        score = estimator.score(x_test, y_test)  # accuracy on the held-out test data
        self.predictor = {'transfer': transfer, 'estimator': estimator}

    def predict(self, text):
        sentence_x = self.predictor['transfer'].transform([text])
        y_predict = self.predictor['estimator'].predict(sentence_x)
        return y_predict[0] == self.allow_id

    def persist(self, output_name):
        joblib.dump(self.predictor['transfer'], output_name+".transfer")
        joblib.dump(self.predictor['estimator'], output_name+".estimator")

    def load(self, input_name):
        self.predictor['transfer'] = joblib.load(input_name+'.transfer')
        self.predictor['estimator'] = joblib.load(input_name+'.estimator')
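As a quick sanity check, here is a hypothetical usage sketch of the class with a tiny made-up dataset (the example strings and the model name are only illustrations; with this little data the prediction is essentially arbitrary, it only shows the call sequence):

model = PredictionModel()
model.create_dataset(
    ["SpaceX launch successful", "New vaccine approved"],      # allow
    ["Protests turn violent", "State of emergency declared"])  # reject
model.train_dataset()
print(model.predict("Successful launch of new rocket"))  # True if predicted as allow
model.persist("my_model")  # writes my_model.transfer and my_model.estimator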

Step 4: Get a Twitter API access

Go to https://developer.twitter.com/en and get your consumer_key, consumer_secret, access_token, and access_token_secret.

api_key = {
    'consumer_key': "",
    'consumer_secret': "",
    'access_token': "",
    'access_token_secret': ""
}

Also see here for a deeper tutorial on how to get them if in doubt.

Step 5: Simplify your Twitter connection

If you do not already have the tweepy library, then install it with the following command.

pip install tweepy

As you will only read tweets from users, the following class will help you to simplify your code.

import tweepy


class TwitterConnection:
    def __init__(self, api_key):
        # authentication of consumer key and secret
        auth = tweepy.OAuthHandler(api_key['consumer_key'], api_key['consumer_secret'])

        # authentication of access token and secret
        auth.set_access_token(api_key['access_token'], api_key['access_token_secret'])
        self.api = tweepy.API(auth)

    def get_tweets(self, user_name, number=0):
        if number > 0:
            return tweepy.Cursor(self.api.user_timeline, screen_name=user_name, tweet_mode="extended").items(number)
        else:
            return tweepy.Cursor(self.api.user_timeline, screen_name=user_name, tweet_mode="extended").items()

  • __init__: Sets up the Twitter API connection in the init-function.
  • get_tweets: Returns the tweets from a user_name (screen_name); the number argument limits how many tweets are returned (0 means all available).
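For example, to print the five most recent tweets from a user (assuming api_key is filled out as in Step 4):

twitter_con = TwitterConnection(api_key)
for tweet in twitter_con.get_tweets("@cnnbrk", number=5):
    print(tweet.full_text)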

Step 6: Collect the dataset (Features X and Target Y) from Twitter

To simplify your life you will use the above TwitterConnection class and the PredictionModel class.

def get_features(auth, user_name, output_name):
    positives = []
    negatives = []
    twitter_con = TwitterConnection(auth)
    tweets = twitter_con.get_tweets(user_name)
    for tweet in tweets:
        print(tweet.full_text)
        print("a/r/e (allow/reject/end)? ", end='')
        response = input()
        if response.lower() == 'a':
            positives.append(tweet.full_text)
        elif response.lower() == 'e':
            break
        else:
            negatives.append(tweet.full_text)
    model = PredictionModel()
    model.create_dataset(positives, negatives)
    model.train_dataset()
    model.persist(output_name)

The function reads the tweets from user_name and prompts for each one of them whether it should be added to tweets you allow or reject.

When you do not feel like “training” your set any more (i.e., collecting more training data), you can press e.

Then it will create the dataset, train the model, and finally persist it.

Step 7: See how good it predicts your tweets based on your model

The following code will print the first number tweets from user_name that your model allows.

def fetch_tweets_prediction(auth, user_name, input_name, number):
    model = PredictionModel()
    model.load(input_name)
    twitter_con = TwitterConnection(auth)
    tweets = twitter_con.get_tweets(user_name)
    for tweet in tweets:
        if model.predict(tweet.full_text):
            print(tweet.full_text)
            number -= 1
        if number <= 0:
            break

The final piece is to call it. Remember to fill in your values for the api_key.

api_key = {
    'consumer_key': "",
    'consumer_secret': "",
    'access_token': "",
    'access_token_secret': ""
}

get_features(api_key, "@cnnbrk", "cnnbrk")
fetch_tweets_prediction(api_key, "@cnnbrk", "cnnbrk", 10)

Conclusion

I trained my model on 30-40 tweets with the above code. On the training set it did not have any false positives (that is, an allow which was a reject in the dataset), but it did have false rejects.

The full code is here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
import joblib
import tweepy


class PredictionModel:
    def __init__(self):
        self.predictor = {}
        self.dataset = {'features': [], 'targets': []}
        self.allow_id = 0
        self.reject_id = 1

    def create_dataset(self, allow_data, reject_data):
        features_y = allow_data + reject_data
        targets_x = [self.allow_id]*len(allow_data) + [self.reject_id]*len(reject_data)
        self.dataset = {'features': features_y, 'targets': targets_x}

    def train_dataset(self):
        x_train, x_test, y_train, y_test = train_test_split(self.dataset['features'], self.dataset['targets'])

        transfer = TfidfVectorizer()
        x_train = transfer.fit_transform(x_train)
        x_test = transfer.transform(x_test)

        estimator = MultinomialNB()
        estimator.fit(x_train, y_train)

        score = estimator.score(x_test, y_test)  # accuracy on the held-out test data
        self.predictor = {'transfer': transfer, 'estimator': estimator}

    def predict(self, text):
        sentence_x = self.predictor['transfer'].transform([text])
        y_predict = self.predictor['estimator'].predict(sentence_x)
        return y_predict[0] == self.allow_id

    def persist(self, output_name):
        joblib.dump(self.predictor['transfer'], output_name+".transfer")
        joblib.dump(self.predictor['estimator'], output_name+".estimator")

    def load(self, input_name):
        self.predictor['transfer'] = joblib.load(input_name+'.transfer')
        self.predictor['estimator'] = joblib.load(input_name+'.estimator')


class TwitterConnection:
    def __init__(self, api_key):
        # authentication of consumer key and secret
        auth = tweepy.OAuthHandler(api_key['consumer_key'], api_key['consumer_secret'])

        # authentication of access token and secret
        auth.set_access_token(api_key['access_token'], api_key['access_token_secret'])
        self.api = tweepy.API(auth)

    def get_tweets(self, user_name, number=0):
        if number > 0:
            return tweepy.Cursor(self.api.user_timeline, screen_name=user_name, tweet_mode="extended").items(number)
        else:
            return tweepy.Cursor(self.api.user_timeline, screen_name=user_name, tweet_mode="extended").items()


def get_features(auth, user_name, output_name):
    positives = []
    negatives = []
    twitter_con = TwitterConnection(auth)
    tweets = twitter_con.get_tweets(user_name)
    for tweet in tweets:
        print(tweet.full_text)
        print("y/n/e (positive/negative/end)? ", end='')
        response = input()
        if response.lower() == 'a':
            positives.append(tweet.full_text)
        elif response.lower() == 'e':
            break
        else:
            negatives.append(tweet.full_text)
    model = PredictionModel()
    model.create_dataset(positives, negatives)
    model.train_dataset()
    model.persist(output_name)


def fetch_tweets_prediction(auth, user_name, input_name, number):
    model = PredictionModel()
    model.load(input_name)
    twitter_con = TwitterConnection(auth)
    tweets = twitter_con.get_tweets(user_name)
    for tweet in tweets:
        if model.predict(tweet.full_text):
            print("POS", tweet.full_text)
            number -= 1
        else:
            # uncomment the line below to also see the rejected tweets
            # print("NEG", tweet.full_text)
            pass
        if number <= 0:
            break

api_key = {
    'consumer_key': "_",
    'consumer_secret': "_",
    'access_token': "_-_",
    'access_token_secret': "_"
}

get_features(api_key, "@cnnbrk", "cnnbrk")
fetch_tweets_prediction(api_key, "@cnnbrk", "cnnbrk", 10)

How To Get Started with a Predictive Machine Learning Program in Python in 5 Easy Steps

What will you learn?

  • How to predict from a dataset with Machine Learning
  • How to implement that in Python
  • How to get data from Twitter
  • How to install the necessary libraries to do Machine Learning in Python

Step 1: Install the necessary libraries

The sklearn library provides simple and efficient tools for predictive data analysis.

You can install it by typing the following in your command line.

pip install sklearn

It will most likely install a couple more required libraries, as shown below.

Collecting sklearn
  Downloading sklearn-0.0.tar.gz (1.1 kB)
Collecting scikit-learn
  Downloading scikit_learn-0.23.1-cp38-cp38-macosx_10_9_x86_64.whl (7.2 MB)
     |████████████████████████████████| 7.2 MB 5.0 MB/s 
Collecting numpy>=1.13.3
  Downloading numpy-1.18.4-cp38-cp38-macosx_10_9_x86_64.whl (15.2 MB)
     |████████████████████████████████| 15.2 MB 12.6 MB/s 
Collecting joblib>=0.11
  Downloading joblib-0.15.1-py3-none-any.whl (298 kB)
     |████████████████████████████████| 298 kB 8.1 MB/s 
Collecting threadpoolctl>=2.0.0
  Downloading threadpoolctl-2.1.0-py3-none-any.whl (12 kB)
Collecting scipy>=0.19.1
  Downloading scipy-1.4.1-cp38-cp38-macosx_10_9_x86_64.whl (28.8 MB)
     |████████████████████████████████| 28.8 MB 5.8 MB/s 
Using legacy setup.py install for sklearn, since package 'wheel' is not installed.
Installing collected packages: numpy, joblib, threadpoolctl, scipy, scikit-learn, sklearn
    Running setup.py install for sklearn ... done
Successfully installed joblib-0.15.1 numpy-1.18.4 scikit-learn-0.23.1 scipy-1.4.1 sklearn-0.0 threadpoolctl-2.1.0

As you can see from my installation, it also installed numpy, joblib, threadpoolctl, scipy, and scikit-learn.

Step 2: The dataset

The machine learning algorithm needs a dataset to train on. To keep this tutorial simple, I only used a limited set. I looked through the top tweets from CNN Breaking and categorised them into positive and negative tweets (I know it can be subjective).

negative = [
    "Protesters who were marching from Minneapolis to St. Paul were tear gassed by police as they tried to cross the Lake Street Marshall Bridge ",
    "The National Guard has been activated in Washington, D.C. to assist police handling protests around the White House",
    "Police have been firing tear gas at the protesters near the 5th Precinct in Minneapolis, where some in the crowd have responded with projectiles of their own",
    "Texas and Colorado have activated the National Guard respond to protests",
    "The mayor of Rochester, New York, has declared a state of emergency and ordered a curfew from 9 p.m. Saturday to 7 a.m. Sunday",
    "Cleveland, Ohio, has enacted a curfew that will go into effect at 8 p.m. Saturday and last through 8 a.m. Sunday",
    "A police car appears to be on fire in Los Angeles. Police officers are holding back a line of demonstrators to prevent them from getting close to the car."
            ]

positive = [
    "Two NASA astronauts make history with their successful launch into space aboard a SpaceX rocket",
    "After questionable weather, officials give the all clear for the SpaceX launch",
    "NASA astronauts Bob Behnken and Doug Hurley climb aboard SpaceX's Crew Dragon spacecraft as they prepare for a mission to the International Space Station",
    "New York Gov. Andrew Cuomo signs a bill giving death benefits to families of frontline workers who died battling the coronavirus pandemic"
]

Step 3: Train the model

The data needs to be categorised before it is fed into the training algorithm. Hence, we build the required structure of the dataset.

def prepare_data(positive, negative):
    data = positive + negative
    target = [0]*len(positive) + [1]*len(negative)  # 0 = positive, 1 = negative
    return {'data': data, 'target': target}

The actual training is done by using the sklearn library.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split

def train_data_set(data_set):
    x_train, x_test, y_train, y_test = train_test_split(data_set['data'], data_set['target'])

    transfer = TfidfVectorizer()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)

    estimator = MultinomialNB()
    estimator.fit(x_train, y_train)

    score = estimator.score(x_test, y_test)
    print("score:\n", score)
    return {'transfer': transfer, 'estimator': estimator}
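The returned predictor can then classify a new sentence. A small sketch (the example sentence is made up for illustration):

data_set = prepare_data(positive, negative)
predictor = train_data_set(data_set)

# classify a new sentence with the trained predictor
sentence = "NASA astronauts arrive safely at the International Space Station"
sentence_x = predictor['transfer'].transform([sentence])
y_predict = predictor['estimator'].predict(sentence_x)
print("positive" if y_predict[0] == 0 else "negative")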

Step 4: Get some tweets from CNN Breaking and predict

In order for this step to work you need to set up tokens for the Twitter API. You can follow this tutorial to do that.

When you have them, you can use the following code to get it running.

import tweepy


def setup_twitter():
    consumer_key = "REPLACE WITH YOUR KEY"
    consumer_secret = "REPLACE WITH YOUR SECRET"
    access_token = "REPLACE WITH YOUR TOKEN"
    access_token_secret = "REPLACE WITH YOUR TOKEN SECRET"

    # authentication of consumer key and secret
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)

    # authentication of access token and secret
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)
    return api


def mood_on_cnn(api, predictor):
    stat = [0, 0]
    for status in tweepy.Cursor(api.user_timeline, screen_name='@cnnbrk', tweet_mode="extended").items():
        sentence_x = predictor['transfer'].transform([status.full_text])
        y_predict = predictor['estimator'].predict(sentence_x)

        stat[y_predict[0]] += 1

    return stat

Step 5: Putting it all together

That is it.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
import tweepy


negative = [
    "Protesters who were marching from Minneapolis to St. Paul were tear gassed by police as they tried to cross the Lake Street Marshall Bridge ",
    "The National Guard has been activated in Washington, D.C. to assist police handling protests around the White House",
    "Police have been firing tear gas at the protesters near the 5th Precinct in Minneapolis, where some in the crowd have responded with projectiles of their own",
    "Texas and Colorado have activated the National Guard respond to protests",
    "The mayor of Rochester, New York, has declared a state of emergency and ordered a curfew from 9 p.m. Saturday to 7 a.m. Sunday",
    "Cleveland, Ohio, has enacted a curfew that will go into effect at 8 p.m. Saturday and last through 8 a.m. Sunday",
    "A police car appears to be on fire in Los Angeles. Police officers are holding back a line of demonstrators to prevent them from getting close to the car."
            ]

positive = [
    "Two NASA astronauts make history with their successful launch into space aboard a SpaceX rocket",
    "After questionable weather, officials give the all clear for the SpaceX launch",
    "NASA astronauts Bob Behnken and Doug Hurley climb aboard SpaceX's Crew Dragon spacecraft as they prepare for a mission to the International Space Station",
    "New York Gov. Andrew Cuomo signs a bill giving death benefits to families of frontline workers who died battling the coronavirus pandemic"
]


def prepare_data(positive, negative):
    data = positive + negative
    target = [0]*len(positive) + [1]*len(negative)
    return {'data': data, 'target': target}


def train_data_set(data_set):
    x_train, x_test, y_train, y_test = train_test_split(data_set['data'], data_set['target'])

    transfer = TfidfVectorizer()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)

    estimator = MultinomialNB()
    estimator.fit(x_train, y_train)

    score = estimator.score(x_test, y_test)
    print("score:\n", score)
    return {'transfer': transfer, 'estimator': estimator}


def setup_twitter():
    consumer_key = "REPLACE WITH YOUR KEY"
    consumer_secret = "REPLACE WITH YOUR SECRET"
    access_token = "REPLACE WITH YOUR TOKEN"
    access_token_secret = "REPLACE WITH YOUR TOKEN SECRET"

    # authentication of consumer key and secret
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)

    # authentication of access token and secret
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)
    return api


def mood_on_cnn(api, predictor):
    stat = [0, 0]
    for status in tweepy.Cursor(api.user_timeline, screen_name='@cnnbrk', tweet_mode="extended").items():
        sentence_x = predictor['transfer'].transform([status.full_text])
        y_predict = predictor['estimator'].predict(sentence_x)

        stat[y_predict[0]] += 1

    return stat


data_set = prepare_data(positive, negative)
predictor = train_data_set(data_set)

api = setup_twitter()
stat = mood_on_cnn(api, predictor)

print(stat)
print("Mood (0 good, 1 bad)", stat[1]/(stat[0] + stat[1]))

I got the following output on the day of writing this tutorial.

score:
 1.0
[751, 2455]
Mood (0 good, 1 bad) 0.765751715533375

I found that the breaking news items are quite negative in tone, and the model's predictions reflect that.

How to Reformat a Text File in Python

The input file and the desired output

The task is to reformat the following input format.

Computing
“I do not fear computers. I fear lack of them.”
— Isaac Asimov

“A computer once beat me at chess, but it was no match for me at kick boxing.”
— Emo Philips

“Computer Science is no more about computers than astronomy is about telescopes.”
— Edsger W. Dijkstra

To the following output format.

“I do not fear computers. I fear lack of them.” (Isaac Asimov)
“A computer once beat me at chess, but it was no match for me at kick boxing.” (Emo Philips)
“Computer Science is no more about computers than astronomy is about telescopes.” (Edsger W. Dijkstra)

The Python code doing the job

The following simple code does the reformatting in less than a second for a file containing several hundred quotes.

file = open("input")
content = file.readlines()
file.close()

lines = []
next_line = ""
for line in content:
    line = line.strip()
    if len(line) > 0 and len(line.split()) > 1:
        if line[0] == '“':
            next_line = line
        elif line[0] == '—':
            next_line += " (" + line[2:] + ")"
            lines.append(next_line)
            next_line = ""


file = open("output", "w")
for line in lines:
    file.write(line + "\n")
file.close()

How to Fetch CNN Breaking Tweets and Make Simple Statistics Automated with Python

What will we cover

  • We will use the tweepy library
  • Read the newest tweets from CNN Breaking
  • Make simple word statistics on the news tweets
  • See if we can learn anything from it

Preliminaries

You need the tweepy library installed and your Twitter API tokens at hand (see the sections above).

The Code that does the magic

import tweepy

# personal details insert your key, secret, token and token_secret here
consumer_key = ""
consumer_secret = ""
access_token = ""
access_token_secret = ""

# authentication of consumer key and secret
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)

# authentication of access token and secret
auth.set_access_token(access_token, access_token_secret)

# Creation of the actual interface, using authentication
api = tweepy.API(auth)

# Use a dictionary to count the appearances of words
stat = {}

# Read the tweets from @cnnbrk and make the statistics
for status in tweepy.Cursor(api.user_timeline, screen_name='@cnnbrk', tweet_mode="extended").items():
    for word in status.full_text.split():
        if word in stat:
            stat[word] += 1
        else:
            stat[word] = 1

# Let's just print the top 10
top = 10

# Let us sort them on the value in reverse order to get the highest first
for word in sorted(stat, key=stat.get, reverse=True):
    # leave out all the small words
    if len(word) > 6:
        print(word, stat[word])
        top -= 1
        if top <= 0:
            break

The result of the above (done May 30th, 2020)

coronavirus 441
@CNNPolitics: 439
President 380
updates: 290
impeachment 148
officials 130
according 100
Trump's 98
Democratic 96
against 88
Department 83

The coronavirus was still the dominant breaking-news subject that day.
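As a side note, the counting loop can be written more compactly with collections.Counter from the standard library. A sketch of the same statistics, assuming api is set up as above:

from collections import Counter

stat = Counter()
for status in tweepy.Cursor(api.user_timeline, screen_name='@cnnbrk', tweet_mode="extended").items():
    stat.update(status.full_text.split())

# the 10 most common words longer than 6 characters
long_words = [(word, count) for word, count in stat.most_common() if len(word) > 6]
for word, count in long_words[:10]:
    print(word, count)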

Next steps

  • It should be extended to have a more intelligent interpretation of the data.

Understand the Password Validation in Mac in 3 Steps – Implement the Validation in Python

What will you learn?

  • The password validation process in Mac
  • How to extract the password validation values
  • Implementing the check in Python
  • Understand why the values are as they are
  • The importance of using a salt value with the password
  • Learn why the hash function is iterated multiple times

The Mac password validation process

Every time you log into your Mac it needs to verify that you used the correct password before giving you access.

The validation process reads hash, salt and iteration values from storage and uses them to validate your password.

The 3 steps below help you locate your values and show how the validation process is done.

Step 1: Locating and extracting the hash, salt and iteration values

You need to use a terminal to extract the values. The following command should print them in a readable way.

sudo defaults read /var/db/dslocal/nodes/Default/users/<username>.plist ShadowHashData | tr -dc 0-9a-f | xxd -r -p | plutil -convert xml1 - -o -

Here you need to replace <username> with your actual user name. The command will prompt you for an admin password.

This should result in an output similar to this.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>SALTED-SHA512-PBKDF2</key>
	<dict>
		<key>entropy</key>
		<data>
                1meJW2W6Zugz3rKm/n0yysV+5kvTccA7EuGejmyIX8X/MFoPxmmbCf3BE62h
                6wGyWk/TXR7pvXKgjrWjZyI+Fc3aKfv1LNQ0/Qrod3lVJcWd9V6Ygt+MYU
                8Eptv3uwDcYf6Z5UuF+Hg67rpoDAWhJrC1PEfL3vcN7IoBqC5NkIU=
		</data>
		<key>iterations</key>
		<integer>45454</integer>
		<key>salt</key>
		<data>
		6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI=
		</data>
	</dict>
</dict>
</plist>

Step 2: Understand the output

The output consists of four pieces.

  • Key value: SALTED-SHA512-PBKDF2
  • Entropy: Base64 encoded data.
  • Number of iterations: 45454
  • Salt: Base64 encoded data

The key value tells you which algorithm is used (SHA512) and how it is used (PBKDF2).

The entropy is the actual result of the validation algorithm determined by the key value. This “value” is not an encryption of the password: you cannot recover the password from it, but you can validate whether a password matches it.

Confused? I know. But you will understand when we implement the solution.

The number of iterations, here 45454, is the number of times the hash function is called. But why would you call the hash function multiple times? Follow along and you will see.

Finally, we have the salt value. It ensures that you cannot determine the password from the entropy value alone. This will also be explained with an example below.

Step 3: Validating the password with Python

Before we explain the above, we need to have Python check the password.

import hashlib
import base64

iterations = 45454
salt = base64.b64decode("6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI=".encode())
password = "password".encode()

value = hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128)
print(base64.b64encode(value))

Which will generate the following output

b'1meJW2W6Zugz3rKm/n0yysV+5kvTccA7EuGejmyIX8X/MFoPxmmbCf3BE62h6wGyWk/TXR7pvXKgjrWjZyI+Fc3aKfv1LNQ0/Qrod3lVJcWd9V6Ygt+MYU8Eptv3uwDcYf6Z5UuF+Hg67rpoDAWhJrC1PEfL3vcN7IoBqC5NkIU='

That matches the entropy content of the file.

So what happened in the above Python code?

We use the hashlib library to do all the work for us. It takes the algorithm (sha512), the password (yes, I used the password ‘password’ in this example; you should not use that for anything you want to keep secret from the public), the salt, and the number of iterations.

Now we are ready to explore the questions.

Why use a Hash value and not an encryption of the password?

If the password was encrypted, then an admin on your network would be able to decrypt it and misuse it.

Hence, to keep it safe from that, an iterated hash value of your password is used.

A hash function is a one-way function that maps any input to a fixed-size output. A hash function has these important properties in regard to passwords:

  • It will always map the same input to the same output. Hence, your password will always be mapped to the same value.
  • A small change in the input will give a big change in output. Hence, if you change one character in the password (say, from ‘password’ to ‘passward’) the hash value will be totally different.
  • It is not easy to find an input that maps to a given hash value. Hence, it is not feasible to recover your password from the hash value.
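The second property is easy to see for yourself. A small sketch comparing the SHA-512 digests of two almost identical passwords:

import hashlib

# one character differs between the two inputs
print(hashlib.sha512("password".encode()).hexdigest())
print(hashlib.sha512("passward".encode()).hexdigest())
# the two digests look completely unrelated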

Why use multiple iterations of the hash function?

To slow it down.

Basically, the way you find passwords is by trying all possibilities. You try ‘a’ and hash it to check whether that matches. Then you try ‘b’, and so on.

If that process is slow, you decrease the odds of someone finding your password.

To demonstrate this we can use the cProfile library to investigate the difference in run time. First, let us try it with the 45454 iterations in the hash function.

import hashlib
import base64
import cProfile


def crack_password(entropy, iterations, salt):
    # brute-force all two-character lowercase passwords
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    for c1 in alphabet:
        for c2 in alphabet:
            password = str.encode(c1 + c2)
            value = base64.b64encode(hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128))
            if value == entropy:
                return password


entropy = "kRqabDBsvkyAhpzzVWJtdqbtqgkgNPwr5gqWG6jvw73hxc7CCvC4E33WyR5bxKmAXG5vAG9/ue+DC7BYLHRfOTE/dLKSMdpE9RFH7ZlTp7GHdH5b5vaqQCcKlXAwkky786zvpucDIgGGTOyw6kKB5hqIXLX9chDvcPQksVrjmUs=".encode()
iterations = 45454
salt = base64.b64decode("6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI=".encode())

cProfile.run("crack_password(entropy, iterations, salt)")

This results in the following run time.

        1    0.011    0.011   58.883   58.883 ShadowFile.py:6(crack_password)

About 1 minute.

If we change the number of iterations to 1.

import hashlib
import base64
import cProfile


def crack_password(entropy, iterations, salt):
    # brute-force all two-character lowercase passwords
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    for c1 in alphabet:
        for c2 in alphabet:
            password = str.encode(c1 + c2)
            value = base64.b64encode(hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128))
            if value == entropy:
                return password


entropy = "kRqabDBsvkyAhpzzVWJtdqbtqgkgNPwr5gqWG6jvw73hxc7CCvC4E33WyR5bxKmAXG5vAG9/ue+DC7BYLHRfOTE/dLKSMdpE9RFH7ZlTp7GHdH5b5vaqQCcKlXAwkky786zvpucDIgGGTOyw6kKB5hqIXLX9chDvcPQksVrjmUs=".encode()
iterations = 1
salt = base64.b64decode("6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI=".encode())

cProfile.run("crack_password(entropy, iterations, salt)")

I guess you are not surprised that it takes less than 1 second.

        1    0.002    0.002    0.010    0.010 ShadowFile.py:6(crack_password)

Hence, you can check far more passwords if the hash is only iterated once.

Why use a Salt?

This is interesting.

Well, say that another user also used the password ‘password’ and there was no salt.

import hashlib
import base64

iterations = 45454
salt = base64.b64decode("".encode())
password = "password".encode()

value = hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128)
print(base64.b64encode(value))

Which will generate the following output.

b'kRqabDBsvkyAhpzzVWJtdqbtqgkgNPwr5gqWG6jvw73hxc7CCvC4E33WyR5bxKmAXG5vAG9/ue+DC7BYLHRfOTE/dLKSMdpE9RFH7ZlTp7GHdH5b5vaqQCcKlXAwkky786zvpucDIgGGTOyw6kKB5hqIXLX9chDvcPQksVrjmUs='

Then both users would get the same hash value, and anyone seeing the stored hashes could tell they use the same password.

Hence, a new random salt is used for each user password.
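A fresh salt is typically just a block of random bytes. A minimal sketch of how one could be generated and used (the 32-byte length is an assumption matching the salt in the plist above):

import hashlib
import base64
import os

password = "password".encode()
salt = os.urandom(32)  # a fresh random salt per user (32 bytes, like the plist salt)
iterations = 45454

value = hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128)
# store both the salt and the hash - the salt is needed to validate the password later
print(base64.b64encode(salt))
print(base64.b64encode(value))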

How to proceed from here?

If you want to crack passwords, then I would recommend you use Hashcat.

How Caesar Cipher Teaches us the Most Valuable Lesson – Learn Kerckhoff’s Principle in 5 Steps with Python Code

What will we cover?

  • Understand the challenge to send a secret message
  • Understand the Caesar Cipher
  • How to create an implementation of that in Python
  • How to break the Caesar Cipher
  • Understand the importance of Kerckhoff’s Principle

Step 1: Understand the challenge to send a secret message

In cryptography you have three people involved in almost any scenario. We have Alice, who wants to send a message to Bob. But Alice wants to send it in a way that ensures that Eve (the evil adversary) cannot understand it.

But let’s break with tradition and introduce an additional person, Mike. Mike is the messenger, because we are back in the times of Caesar. Alice represents one of Caesar’s close generals who needs to send a message to the front lines of the army. Bob is on the front line and waits for a command from Alice: DO ATTACK or NO ATTACK.

Alice will use Mike, the messenger, to send that message to Bob.

Alice is of course afraid that Eve, the evil enemy, will capture Mike along the way.

Of course, as Alice is smart, she knows that Mike should not understand the message he is delivering, and Eve should not be able to understand it either. It should only have value to Bob when Mike hands him the message.

That is the problem that Caesar wanted to solve with his cipher system.

Step 2: Understand the Caesar Cipher

Let’s do this a bit backwards.

You receive the message: BRX DUH DZHVRPH.

That is pretty impossible to understand. But if you were told that this is the Caesar Cipher using a shift of 3 characters, then maybe it makes sense.

With a shift of 3, each plaintext letter is replaced by the letter 3 positions further down the alphabet. Hence, a plaintext A becomes a D, as A is shifted 3 characters down the row.

Reversing this, you see that an encrypted B maps back to the plaintext Y.

If you continue this process, the full message decrypts to: YOU ARE AWESOME.

That is a nice message to get.

Step 3: How to create an implementation of that in Python

Well, that is easy. There are many ways to do it. I will make use of a dictionary to make my life easy.

def generate_key(n):
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    key = {}
    cnt = 0
    for c in letters:
        key[c] = letters[(cnt + n) % len(letters)]
        cnt += 1
    return key


def get_decryption_key(key):
    dkey = {}
    for c in key:
        dkey[key[c]] = c
    return dkey

    
def encrypt(key, message):
    cipher = ""
    for c in message:
        if c in key:
            cipher += key[c]
        else:
            cipher += c
    return cipher


# This is setting up your Caesar Cipher key
key = generate_key(3)
# Hmm... I guess this will print the key
print(key)
# This will encrypt the message you have chosen with your key
message = "YOU ARE AWESOME"
cipher = encrypt(key, message)
# I guess we should print out your AWESOME message
print(cipher)
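Decryption simply reuses encrypt with the reversed key that get_decryption_key builds. A short check:

# decrypt by encrypting with the reversed key
dkey = get_decryption_key(key)
plaintext = encrypt(dkey, cipher)
print(plaintext)  # prints: YOU ARE AWESOME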

Step 4: How to break the Caesar Cipher

If you look at it like this, there is a flaw in the system. Can you see it?

Yes, of course you can. We are in the 2020s and not back in the times of Caesar.

The key space is too small.

Breaking it basically takes the following code.

# this is us breaking the cipher: try all 26 possible shifts
# one of the printed candidates will be the plaintext
print(cipher)
for i in range(26):
    candidate_key = generate_key(i)
    message = encrypt(candidate_key, cipher)
    print(message)

You read the code correctly. There are only 26 keys. That means that even back in the days of Caesar this could be done by hand.

This leads us to the most valuable lesson and the most important principle in cryptography.

Step 5: Understand the importance of Kerckhoff’s Principle

Let’s just recap what happened here.

Alice sent a message to Bob that Eve captured. Eve did not understand it.

But the reason why Eve did not understand it was not that she did not have the key.

No, it was that she did not know the algorithm.

Yes, if Eve knew the algorithm of Caesar Cipher, she would not need the secret key to break it.

This leads to the most important lesson in cryptography: Kerckhoff’s Principle.

Eve should not be able to break the cipher even when she knows everything about it except the key.

Kerckhoff’s Principle

That seems counterintuitive, right? Yes, but think about it: if your system is secure against any attack even when you reveal your algorithm, then that gives you more confidence that it is secure.

Your security should not be based on keeping the algorithm secret. No, it should be based on the secret key.

Is that principle followed?

No.

Most government ciphers are kept secret.

Many secret encryption algorithms that leaked were broken.

This also includes the ciphers used for mobile traffic in the old 2G (GSM) network: A5/1 and the export version A5/2.

Learn the Basics in PyCharm – How to Program as a Professional with Python

What is PyCharm?

PyCharm is an integrated development environment (IDE) used in computer programming, specifically for the Python language.

Learn more about it here. Where to download it?

Is it free? New to Python?

Get Started in PyCharm and Create Your First Program in less than 5 Minutes

How do you start in PyCharm? Create a project? What is that? How do you get from the first start to running your first program in PyCharm? Want to learn more about Python?

Learn the Basics in PyCharm Debugger in 6 Minutes

In this video we are going to learn the basics in the PyCharm Debugger.

There are a lot of nice things you can do, but basically you just need a small percentage of those to get started. Follow me in a simple walk-through debugging a Python program.

Want to learn more about debugging? Debugging is one of those tasks you hate and love. You hate it when your program doesn’t do as you expect. But you love it when you figure out why.

A debugger helps you in getting from HATE to LOVE.

New to Python and Programming? Check out the online course below.

Check out my Beginners Level Course on Python

Queue vs Python list – Comparing the Performance – Can a simple Queue beat the default Python list?

How to profile a program in Python

In this video we will see how cProfile (default Python library) can help you to get run-times from your Python program.

Queue vs Python lists

In this video we will compare the performance of a simple Queue implemented directly into Python (no optimisations) with the default Python list.

Can it compare with it on performance?

This is where time complexity analysis comes into the picture. Queue insertion and deletion are O(1). A Python list used as a queue has O(n) deletion, because removing the front element shifts every remaining element.

But do the performance and run time show the same? Here we compare the run times using cProfile in Python.
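A minimal sketch of such a comparison (the Queue class below is a simple linked-list implementation written just for this test, not a library class):

import cProfile


class Node:
    def __init__(self, element):
        self.element = element
        self.next = None


class Queue:
    # a simple linked-list queue: enqueue and dequeue are O(1)
    def __init__(self):
        self.head = None
        self.tail = None

    def enqueue(self, element):
        node = Node(element)
        if self.tail is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node

    def dequeue(self):
        element = self.head.element
        self.head = self.head.next
        if self.head is None:
            self.tail = None
        return element


def test_queue(n):
    queue = Queue()
    for i in range(n):
        queue.enqueue(i)
    for _ in range(n):
        queue.dequeue()


def test_list(n):
    lst = []
    for i in range(n):
        lst.append(i)
    for _ in range(n):
        lst.pop(0)  # O(n): every remaining element is shifted one position


cProfile.run("test_queue(100000)")
cProfile.run("test_list(100000)")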

Want to learn more about Linked-lists, Stacks and Queues?

Check out my Course on Linked Lists, Stacks and Queues

Find the Nearest Smaller Element on Left Side in an Array – Understand the Challenge to Solve it Efficiently

The Nearest Smaller Element problem explained:

Given an array (that is, a list) of integers, for each element find the nearest smaller element on its left side.

The naive solution has time complexity O(n^2). Can you solve it in O(n)? Well, you need a Stack to do that.

The naive solution is, for each element, to check all the elements to its left to find the first one that is smaller.

The worst-case run time for that would be O(n^2). For an array of length n, it would take 0 + 1 + 2 + 3 + … + (n-1) = (n-1)n/2 = O(n^2) comparisons.

But with a stack we can improve that, as the sketch below shows.
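The idea of the stack-based approach: keep a stack of candidates seen so far; for each element, pop everything that is not smaller than it, and whatever remains on top is its nearest smaller element on the left. Each element is pushed and popped at most once, which gives O(n) overall. A sketch:

def nearest_smaller_to_left(arr):
    result = []
    stack = []  # candidates, increasing from bottom to top
    for element in arr:
        # discard candidates that are not smaller than the current element
        while stack and stack[-1] >= element:
            stack.pop()
        result.append(stack[-1] if stack else None)
        stack.append(element)
    return result


print(nearest_smaller_to_left([4, 10, 5, 8, 20, 15, 3, 12]))
# [None, 4, 4, 5, 8, 8, None, 3]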

Want to learn more about Stacks?

Check out my Course on Linked Lists, Stacks and Queues