Difference makes the DIFFERENCE
Link: https://padhai.onefourthlabs.in/
https://www.youtube.com/channel/UC337_KN0yuNvzeZnlstEa_A/featured
Link to install NumPy: NumPy is the fundamental package for scientific computing with Python. It contains among other things: a powerful N-dimensional array object; sophisticated ...
At C:\> pip install -U pip   (upgrades pip, if not already up to date)
At C:\> pip install -U numpy
NumPy gets installed. This needs an internet connection.
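Once NumPy is installed, a quick sanity check from Python (a minimal sketch; the printed version depends on what pip fetched):
import numpy as np
print(np.__version__)               # version of the installed NumPy
print(np.arange(6).reshape(2, 3))   # a small N-dimensional array object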
Link to Source: https://docs.scipy.org/doc/numpy/user/quickstart.html
https://angel.co/projects/518942-human-activity-recognition-machine-learning
http://robots.ox.ac.uk/~minhhoai/papers/SegReg_CVPR11.pdf (I could not download this link; if you can, please send me a copy.)
In general, if we do not use sns.set(), the background is plain when the graph is displayed.
For better visual clarity we use sns.set().
https://seaborn.pydata.org/tutorial/aesthetics.html
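A minimal sketch of the effect (assumes matplotlib, numpy and seaborn are installed; the sine curve is just illustrative data):
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set()                     # apply seaborn's default theme (grid background, nicer styling)
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))
plt.show()                    # compare the output with and without the sns.set() line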
An Introduction to Statistical Learning -
http://www-bcf.usc.edu/~gareth/ISL/
Google Machine Learning Course -
https://developers.google.com/machine-learning/crash-course/
Essentials of Metaheuristics -
https://cs.gmu.edu/~sean/book/metaheuristics/Essentials.pdf
(primarily for my personal use...)
basic source: PadhAI Course-ware - Chat Group
Various other sources on Python, AI, DL and ML (Artificial Intelligence, Deep Learning and Machine Learning)
“A Simple Neural Network from Scratch with PyTorch and Google Colab” by Elvis https://link.medium.com/i5BpfxQ4KT
https://pytorch.org/tutorials/beginner/nlp/pytorch_tutorial.html
https://airtable.com/invite/r/nfHfFokR
Airtable: Organize anything you can imagine
Airtable works like a spreadsheet but gives you the power of a database to organize anything. Sign up for free.
The Resources page on this website contains a list of hand-selected goodies that are regularly recommended to both beginners and experts. https://pythondiscord.com/info/resources
Try to understand how these tutorials work: https://www.tensorflow.org/tutorials
Run them with Google Colab.
Udacity Data Science Mega Link if anyone needs
https://mega.nz/#F!qrpxSIRD!PClG5ZMHdd5FroIFTT_Z5Q
Codecademy is another way to learn how to code. It's interactive, fun, and you can do it with your friends. https://www.codecademy.com/catalog/language/python
Best Books And Sites For Machine Learning
How to get started in machine learning? If you are new to machine learning and new to programming, then Python would be a really good choice. ...
https://www.houseofbots.com/news-detail/4685-1-best-books-and-sites-for-machine-learning
Udemy -> free course
Introduction to PyTorch and Machine Learning | Udemy
Learn the basics of ML & PyTorch - Free Course
https://www.udemy.com/intro-to-pytorch-and-machine-learning/
SoloLearn: Learn to Code - This is a web-based learning portal where each module is unlocked only after the current one is completed with a quiz or multiple-choice question, so you are forced to learn in sequence.
Join Now to learn the basics or advance your existing skills. Link: https://www.sololearn.com/
This website presents a series of lectures on quantitative economic modeling, designed and written by Thomas J. Sargent and John Stachurski.
Link: https://lectures.quantecon.org/py/
Recommended Python learning resources:
https://forums.fast.ai/t/recommended-python-learning-resources/26888
Unofficial Windows Binaries for Python Extension Packages.
MIT Machine Learning and Statistics -
https://ocw.mit.edu/courses/sloan-school-of-management/15-097-prediction-machine-learning-and-statistics-spring-2012/index.htm
MIT Linear Algebra Course -
https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/
If this has been useful, then consider giving your support by buying me a coffee
https://ko-fi.com/pythonprogrammer
Book on Amazon Link: https://amzn.to/2ImYVhC
https://www.coursera.org/specializations/mathematics-machine-learning
Artificial Intelligence - A Modern Approach - https://amzn.to/2Ip0Wdi (book)
Breast cancer classification with Keras and Deep Learning - PyImageSearch
https://www.pyimagesearch.com/2019/02/18/breast-cancer-classification-with-keras-and-deep-learning/
https://codequs.com/p/BkaLEq8r4/a-complete-machine-learning-project-walk-through-in-python
This might be helpful for future reference: https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463
If you want to open a Jupyter notebook (.ipynb) file offline on your PC, you first need the Jupyter package. You can install it by typing
pip install --upgrade pip
pip install jupyter
Or, if you are using Anaconda/Miniconda, you don't need to install anything.
For the remaining steps, see this video:
https://youtu.be/esKiryoBJBI
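Once installed, a quick check from Python that the Notebook package is importable (a minimal sketch; the printed version will differ on your machine):
import notebook
print(notebook.__version__)   # confirms the Jupyter Notebook package is available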
Cheat Sheet of Machine Learning and Python (and Math) Cheat Sheets
If you like this article, check out another by Robbie: My Curated List of AI and Machine Learning Resources
https://in.udacity.com/course/ai-artificial-intelligence-nanodegree--nd898
Artificial Intelligence Course | Udacity
Learn essentials of machine learning from experts like Norvig and Sebastian
Demystifying Entropy
https://towardsdatascience.com/demystifying-entropy-f2c3221e2550
The research paper is all about the implementation of our project, including extraction of the text region.
https://www.researchgate.net/publication/311578437_TextNon-text_Image_Classification_in_the_Wild_with_Convolutional_Neural_Networks
In many cases, the information given below the video (under the show-more button) is more useful than the video itself, so please go through that content carefully as well.
YouTube - Harvard University
Lecture 1: Probability and Counting | Statistics 110
In the course of time, some of the links may stop working;
if you find any such links, please let me know the correct address, so that others can have a valid link.
Data Visualization Society Launches
A new community for data visualization professionals has launched. It is called the Data Visualization Society.
https://101.datascience.community/2019/03/02/data-visualization-society-launches/
My publication on the Medium website (as per the course):
YouTube publication:
Last Update Date: 17 Feb 2022
Question:
Consider a message A which can take on 8 values. The probabilities of these 8 values are 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/128. You made an error in estimating these probabilities and came up with the following estimates: 1/128, 1/128, 1/64, 1/32, 1/16, 1/8, 1/4, 1/2 (in other words, you estimated the highest probability for the least likely message, and so on). What is the KL divergence between the true distribution and your estimated distribution (choose the closest rounded-off number)?
Answer:
import math
# True entropy of the actual distribution
H1 = -((1/2)*math.log2(1/2) + (1/4)*math.log2(1/4) + (1/8)*math.log2(1/8) +
(1/16)*math.log2(1/16) + (1/32)*math.log2(1/32) +
(1/64)*math.log2(1/64) + (1/128)*math.log2(1/128) + (1/128)*math.log2(1/128))
# Cross entropy: true probabilities weighting the estimated code lengths
H2 = -((1/2)*math.log2(1/128) + (1/4)*math.log2(1/128) + (1/8)*math.log2(1/64) +
(1/16)*math.log2(1/32) + (1/32)*math.log2(1/16) + (1/64)*math.log2(1/8) +
(1/128)*math.log2(1/4) + (1/128)*math.log2(1/2))
# KL divergence
print(H2 - H1)   # 4.5234, approx. 4.5
Question:
Consider the message A which can take on one of 8 values. The probabilities of these values are 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/128. What is the ideal number of bits that you should use to transmit this message (choose the closest rounded-off number)?
Answer:
Consider the distribution table below (IC = information content = -log2 P(X)):
P(X)   IC (# bits)
1/2 1
1/4 2
1/8 3
1/16 4
1/32 5
1/64 6
1/128 7
1/128 7
So, ideal number of bits = expected number of bits =
1/2*1 + 1/4*2 + 1/8*3 + 1/16*4 + 1/32*5 + 1/64*6 + 1/128*7 + 1/128*7 = 1.98, which rounds to 2.
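The same expectation worked out in Python (a minimal sketch; the probabilities are the ones given in the question):
import math
p = [1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/128]
expected_bits = sum(pi * -math.log2(pi) for pi in p)   # sum of p(x) * (-log2 p(x))
print(expected_bits)   # 1.984375, which rounds to 2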
Question:
Ramesh is supposed to report to the office at 9 am every morning. In the past 30 days, he was either late by 10 minutes or 15 minutes, or he came 5 minutes before time, or exactly on time. More specifically, he was on time on 4 days, late by 10 minutes on 10 days, late by 15 minutes on 5 days, and 5 minutes before time on 11 days. What is the expected time at which he will reach the office today?
Answer:
Note: while explaining, the instructor actually missed dividing by the number of occurrences (30 in this example) to find the expectation.
9:00 AM x 4 days = 0 mins delay
9:10 AM x 10 days = 100 mins delay
9:15 AM x 5 days = 75 mins delay
8:55 AM x 11 days = - 55 mins delay
= 0 mins + 100 mins + 75 mins late - 55 mins early
= 120 mins
Hence 120 mins / 30 days = 4 mins per day
So 9:00 AM + 4 mins, i.e., he is expected to reach at 9:04 AM today.
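The same expectation in Python (a minimal sketch; delays are in minutes relative to 9:00 AM, negative means early):
delays = [0]*4 + [10]*10 + [15]*5 + [-5]*11   # the 30 observed days
expected_delay = sum(delays) / len(delays)
print(expected_delay)   # 4.0 -> expected arrival time is 9:04 AM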
Question:
In the past 1024 days, the Cruciatus curse was used once at Hogwarts. Now consider a random variable X which takes the value 1 if the Cruciatus curse is used and the value 0 if it is not. What is the Information Content of the event X = 1?
Answer:
IC(x) = -log2(P(x)) is the formula.
P(X = 1) = 1 / 1024
Substituting P(x) in the formula:
-log2(1/1024) = -log2(1024^-1) = -(-1) * log2(1024) = log2(2^10)
Since log2(2) = 1 and log(m^n) = n log(m),
the answer is 10 bits.
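Verifying the arithmetic in Python (a minimal sketch):
import math
p = 1 / 1024
print(-math.log2(p))   # 10.0 bits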
Question:
Consider a discrete random variable which can take one of n possible values. State whether True or False: The entropy of this random variable is maximum when all the n values are equally likely.
Answer: 1:
If we put more probability mass on one event of a random variable, we have to take some away from the other events. That event will have less information content but more weight; the others will have more information content but less weight. Since entropy is the expected information content, it goes down, because the event with the lower information content is weighted more.
As an extreme case, imagine one event getting a probability of almost one; the other events will then have a combined probability of almost zero, and the entropy will be very low.
Answer: 2:
Think about it this way: if all events are equally probable, then you don't know which event is most likely to happen, hence entropy is maximum. If the probability of one event is higher, then you can be more certain about what will happen. Entropy can be understood as the degree of confusion you have about what will happen.
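A minimal sketch illustrating this with two hypothetical 4-value distributions, one uniform and one highly skewed:
import math
def entropy(p):
    # H(p) = -sum p_i * log2(p_i); zero-probability events contribute nothing
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: maximum for 4 equally likely values
print(entropy([0.97, 0.01, 0.01, 0.01]))   # about 0.24 bits: one outcome dominates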
If you find any useful links you can share, please post them here; I will include them in the list above.