```python
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

!unset http_proxy
!unset https_proxy
```
I trained this cool image classifier using the fastai library to mess with my friend, who we both agree is bewakoof with directions.
The code snippet above bypasses the internet connectivity issues I ran into while using an internal network with a proxy.
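One caveat: the `!unset` lines run in a throwaway subshell, so they may not actually clear the proxy variables for the notebook kernel itself. A minimal sketch that removes them from within Python instead (the variable names are the usual proxy conventions):

```python
import os

# The ! shell magic spawns a subshell, so its `unset` does not persist;
# popping the variables from os.environ affects the kernel process itself.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ.pop(var, None)  # no error if the variable is absent
```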
We import the fastai library, which includes all the essential functions necessary to create the classifier.
```python
from fastai.vision.all import *
```
Load data
The directory below contains all the images of me and my friend. I collected them from my phone gallery and manually cropped each photo so that it shows only one of us; whenever a photo contained both of us, I cropped it into two images, one per face.
= Path("./bewakoof_detector/data/suchith_or_shivani/") PATH
The image of my friend, Shivani, is shown below in Figure 1. She is the one referred to as “bewakoof” (meaning silly or foolish in Hindi) with directions.
```python
path = './bewakoof_detector/shivani.jpg'
Image.open(path).to_thumb(256, 256)
```
The following image, Figure 2, is of me, referred to as “smart and pro with directions” and the “cool guy” at IIT Delhi.
```python
path = './bewakoof_detector/suchith.jpg'
Image.open(path).to_thumb(256, 256)
```
Data cleaning
I wrote custom code for a widget that can rotate and save the manually cropped images, since a few of them were not saved correctly and ended up rotated by some degrees.
Code
```python
from ipywidgets import widgets
from IPython.display import display

files = get_image_files(PATH)

button_next = widgets.Button(description="Next")
button_pre = widgets.Button(description="Previous")
button_rotate_left = widgets.Button(description="Rotate left", icon="rotate-left")
button_rotate_right = widgets.Button(description="Rotate right", icon="rotate-right")

output = widgets.Output()

COUNTER = -1
IMAGE = None
MODIFIED = False

def button_next_eventhandler(obj):
    global IMAGE, COUNTER, MODIFIED
    if MODIFIED:
        IMAGE.save(files[COUNTER])
        MODIFIED = False
    COUNTER += 1
    output.clear_output()
    if COUNTER < len(files):
        IMAGE = Image.open(files[COUNTER])
        with output:
            display(IMAGE.to_thumb(256, 256))
    else:
        with output:
            display("ERROR::Buffer overflow.")

def button_rotate_left_eventhandler(obj):
    global IMAGE, COUNTER, MODIFIED
    output.clear_output()
    if COUNTER > -1 and COUNTER < len(files):
        MODIFIED = True
        IMAGE = IMAGE.rotate(90)
        with output:
            display(IMAGE.to_thumb(256, 256))
    else:
        with output:
            display("ERROR::Invalid counter value.")

def button_rotate_right_eventhandler(obj):
    global IMAGE, COUNTER, MODIFIED
    output.clear_output()
    if COUNTER > -1 and COUNTER < len(files):
        MODIFIED = True
        IMAGE = IMAGE.rotate(-90)
        with output:
            display(IMAGE.to_thumb(256, 256))
    else:
        with output:
            display("ERROR::Invalid counter value.")

def button_previous_eventhandler(obj):
    global IMAGE, COUNTER, MODIFIED
    if MODIFIED:
        IMAGE.save(files[COUNTER])
        MODIFIED = False
    COUNTER -= 1
    output.clear_output()
    if COUNTER > -1:
        IMAGE = Image.open(files[COUNTER])
        with output:
            display(IMAGE.to_thumb(256, 256))
    else:
        with output:
            display("ERROR::Buffer underflow.")

button_rotate_left.on_click(button_rotate_left_eventhandler)
button_rotate_right.on_click(button_rotate_right_eventhandler)
button_next.on_click(button_next_eventhandler)
button_pre.on_click(button_previous_eventhandler)

item_layout = widgets.Layout(margin="0 0 50px 0")

buttons = widgets.HBox([button_rotate_left, button_rotate_right, button_next, button_pre], layout=item_layout)

tab = widgets.Tab([output])
tab.set_title(0, 'Image')

dashboard = widgets.VBox([buttons, tab], layout=item_layout)
display(dashboard)
```
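Note that `COUNTER` starts at -1, so the first image only appears after clicking Next once; Next and Previous also save any pending rotation before moving to another image.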
Dataloader
The dataloaders built below feed the training and validation images to the model that learns the classifier's weights.
```python
dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=[Resize(224, method='crop')]
).dataloaders(PATH)

dls.show_batch(max_n=9, figsize=(20,20))
```
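Since `get_y=parent_label` reads each image's class from the name of its parent folder, the data directory is assumed to contain one subfolder per person (the file name below is hypothetical):

```python
from pathlib import Path

# parent_label simply returns the name of a file's parent directory:
#   bewakoof_detector/data/suchith_or_shivani/shivani/...  -> label 'shivani'
#   bewakoof_detector/data/suchith_or_shivani/suchith/...  -> label 'suchith'
parent_label(Path("./bewakoof_detector/data/suchith_or_shivani/shivani/img_001.jpg"))
# -> 'shivani'
```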
Model
We will be fine-tuning the resnet34 model.
```python
learner = vision_learner(dls, resnet34, metrics=error_rate)
learner.fine_tune(10)
```
epoch | train_loss | valid_loss | error_rate | time |
---|---|---|---|---|
0 | 1.256843 | 0.918047 | 0.416667 | 00:12 |

epoch | train_loss | valid_loss | error_rate | time |
---|---|---|---|---|
0 | 0.876068 | 0.843697 | 0.444444 | 00:06 |
1 | 0.823288 | 0.790361 | 0.388889 | 00:05 |
2 | 0.749693 | 0.638904 | 0.305556 | 00:05 |
3 | 0.623945 | 0.448487 | 0.194444 | 00:05 |
4 | 0.526806 | 0.328966 | 0.083333 | 00:05 |
5 | 0.456535 | 0.236890 | 0.027778 | 00:05 |
6 | 0.388769 | 0.198798 | 0.027778 | 00:05 |
7 | 0.349373 | 0.187475 | 0.027778 | 00:05 |
8 | 0.307049 | 0.179472 | 0.027778 | 00:06 |
9 | 0.280856 | 0.182338 | 0.027778 | 00:05 |
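The two tables above correspond to the two phases of `fine_tune`: first one epoch with the pretrained body frozen (only the new head trains), then ten epochs with the whole network unfrozen. Roughly, and with illustrative learning rates rather than fastai's exact internal defaults, it is equivalent to:

```python
# A simplified sketch of what learner.fine_tune(10) does internally
learner.freeze()                 # train only the randomly initialised head
learner.fit_one_cycle(1)
learner.unfreeze()               # then train all layers
learner.fit_one_cycle(10, lr_max=slice(1e-5, 1e-3))  # discriminative learning rates (illustrative)
```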
The trained model, including its learned parameters, is exported here so it can be reloaded later for inference.
"./bewakoof_detector/suchith_or_shivani.pth") learner.export(
Prediction
We load the model to make predictions.
```python
learner = load_learner("./bewakoof_detector/suchith_or_shivani.pth")
```
The function below employs the model to make predictions.
```python
def predict(image):
    label, _, probs = learner.predict(image)
    dict_map = {'shivani': 'Bewakoof', 'suchith': 'Smart'}

    display(image.to_thumb(256, 256))
    print(f"Detector output : {dict_map[label]}")
    print(f"The probability it is Shivani : {probs[0]}")
```
Finally, we provide a few predictions on the images using the trained model.
```python
suchith = PILImage.create('./bewakoof_detector/suchith_2.jpg')
predict(suchith)
```
Detector output : Smart
The probability it is Shivani : 0.004921324085444212
Conclusion
I hope this pisses her off and officially establishes that she is “Bewakoof” with directions. I have created a web app where you can upload your image to check whether you are bewakoof like her. Please feel free to try it out and share your thoughts.
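For reference, a web app like that can be wired up in a few lines; the sketch below uses Gradio, which is my assumption and not necessarily what the actual app runs on:

```python
import gradio as gr
from fastai.vision.all import load_learner

learner = load_learner("./bewakoof_detector/suchith_or_shivani.pth")

def classify(img_path):
    # Gradio hands us a file path; fastai can predict on it directly
    label, _, probs = learner.predict(img_path)
    return {"Bewakoof (Shivani)": float(probs[0]), "Smart (Suchith)": float(probs[1])}

gr.Interface(fn=classify,
             inputs=gr.Image(type="filepath"),
             outputs=gr.Label(num_top_classes=2)).launch()
```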