The primer is great, and a quick read. Here is my quick summary:
The basic idea of deep learning is to think about how the brain breaks up a specific task. For example, let’s say you are hiking the Appalachian Trail, and you see something in the distance running towards you. First, you might notice it is moving. Then, you might notice what shape it is. Then, you might notice how fast it is going. Then, you might notice a big snout. Then, your brain will determine that this is an animal.
The process continues until your brain has evaluated, classified, and predicted what object it is seeing. The joy of the mental exercise (for me) is understanding how the human mind works to break down ideas.
Inputs > Algorithm > Prediction > Training:
The following are the key concepts for thinking about deep learning. Yes, this is overly simplified, but it is still a helpful start.
Level of Abstraction 1: Is this a shape?
Level of Abstraction 2: Is this shape an ear?
Level of Abstraction 3: Is this a cat?
Prediction = Yes or No. Is this prediction correct?
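The Inputs > Algorithm > Prediction > Training loop above can be sketched in a few lines of Python. This is a minimal illustration, not how a real deep network works: the inputs are made-up feature scores (hypothetical names like has_ear), and the "algorithm" is a single-layer perceptron rather than stacked levels of abstraction.

```python
# Toy "Inputs > Algorithm > Prediction > Training" loop.
# Each input is a list of hypothetical feature scores
# [has_shape, has_ear, has_whiskers]; the label is 1 for "cat".
data = [
    ([1.0, 1.0, 1.0], 1),  # clearly a cat
    ([1.0, 1.0, 0.0], 1),  # cat with whiskers out of view
    ([1.0, 0.0, 0.0], 0),  # some shape, but not a cat
    ([0.0, 0.0, 0.0], 0),  # nothing cat-like at all
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Algorithm: weighted sum of feature scores -> yes/no prediction."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Training: compare each prediction to the label,
# and nudge the weights in the direction of the error.
for epoch in range(20):
    for features, label in data:
        error = label - predict(features)
        for i in range(len(weights)):
            weights[i] += learning_rate * error * features[i]
        bias += learning_rate * error

print([predict(f) for f, _ in data])  # → [1, 1, 0, 0]
```

After training, the predictions match the labels: the loop has "learned" which combinations of features count as a cat.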
Current State of Deep Learning:
Supervised Deep Learning: In effect, this is attempting to clone human behavior via labeled images, video, text or speech.
Reinforcement Learning: This is where the model attempts to “learn” behaviors, codify what those behaviors mean, and then implement strategies that optimize against them. As the article suggests, the following are some examples:
E-Commerce: model learns customer behaviors and tailors service to suit customer interests.
Finance: model learns market behavior and generates trading strategies.
Robots: model learns how the physical world behaves (through video) and then navigates that world.
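All three examples share the same underlying pattern: try an action, observe a reward, and update an estimate of how good that action is. A minimal sketch of this pattern is an epsilon-greedy bandit. The strategy names and reward numbers below are invented for illustration; in a real system the rewards would come from actual customers or markets, not a simulation.

```python
import random

random.seed(42)

# Hypothetical "true" average rewards for three strategies.
# In practice these are unknown and only observed through feedback.
true_rewards = {"low_price": 0.3, "mid_price": 0.5, "high_price": 0.4}

estimates = {action: 0.0 for action in true_rewards}
counts = {action: 0 for action in true_rewards}
epsilon = 0.1  # fraction of the time we explore a random action

for step in range(20000):
    if random.random() < epsilon:
        action = random.choice(list(true_rewards))  # explore
    else:
        action = max(estimates, key=estimates.get)  # exploit
    # Observe a noisy 0/1 reward (simulated here as a coin flip).
    reward = 1.0 if random.random() < true_rewards[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the new reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))
```

After enough trials the estimates converge toward the true averages, and the model settles on the best-paying strategy — the same learn/codify/optimize loop the article describes, in miniature.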
Network Architecture to Detect Objects in Images:
Feature Extraction: Extract the specific features from the image
Classification: Classify based on the probability of those features
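The two stages above can be sketched in plain Python. This is a hand-built caricature, not a real network: the 5x5 "image", the three feature detectors, and the classification weights are all invented for illustration (a real network learns its features and weights from data).

```python
# A tiny 5x5 binary "image" standing in for real pixels.
image = [
    [1, 0, 0, 0, 1],  # two ear-like marks in the top corners
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
]

def extract_features(img):
    """Stage 1 - feature extraction: reduce raw pixels to a few numbers."""
    flat = [p for row in img for p in row]
    return {
        "fill_ratio": sum(flat) / len(flat),    # how "busy" the image is
        "top_corners": img[0][0] + img[0][-1],  # ear-like marks up top?
        "symmetry": sum(row == row[::-1] for row in img) / len(img),
    }

def classify(features):
    """Stage 2 - classification: weighted vote over the extracted features."""
    score = (2.0 * features["top_corners"]
             + 1.5 * features["symmetry"]
             + 1.0 * features["fill_ratio"])
    return "cat" if score > 2.0 else "not a cat"

print(classify(extract_features(image)))  # → cat
```

The point of the split is that the classifier never sees raw pixels; it only sees the handful of numbers the extraction stage produced, which is the same division of labor a convolutional network makes at much larger scale.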
At the heart of getting something done is getting everyone on the same page to move the project forward. This is especially relevant in an environment with individuals from different department teams and, ultimately, different walks of life.
The diversity of views creates a challenge: Do we argue about the rightness or wrongness of ideas or a decision? Or, do we agree on a set of “guiding principles” and make decisions?
What is a guiding principle?
A guiding principle is a statement that summarizes a criterion or value-based mechanism for making decisions. Let’s take the following situation in pricing strategy:
Situation: Company X has developed a patented Water Retention System that helps trees grow faster, while using 80% less water.
Complication: The founder and owner of Company X wants to target low-income farmers that cannot afford an expensive system. The venture capitalists want the founder to charge higher rates to ensure maximum distribution and, ultimately, a strong return on their investment.
Question: How should Company X price the Water Retention System?
If you were in this situation, how would you facilitate a decision? Clearly, both the owner and the venture capitalists have a strong case to make regarding the rightness of their decision.
Option A: Conduct a pricing analysis, and present different prices and see if there is a price that meets both needs.
Option B: Develop a core set of guiding principles around making key business decisions, and then conduct a pricing analysis, and evaluate the options based on the guiding principles.
In Option A, there is an implicit debate about what the Owner and the Venture Capitalists value. In Option B, there is an explicit debate about what the Owner and the Venture Capitalists value.
Why is this important?
The point of this example is that by codifying the unsaid in guiding principles, each individual can evaluate what they value and see whether it resonates.
If there is resonance, then decision making and team dynamics can be more fluid (or, at the least — easier).
If there is not resonance, then decision making will be stalled and inauthentic — team members may grudgingly go along, but there will continue to be dissension as increasingly complex decisions are made, and the team will need to decide whether to continue together or not.
Originally posted on Karma Advisory’s medium page here.
“In the case of CheXnet, the research team led by Stanford adjunct professor Andrew Ng, started by training the neural network with 112,120 chest X-ray images that were previously manually labeled with up to 14 different diseases. One of them was pneumonia. After training it for a month, the software beat previous computer-based methods to detect this type of infection. The Stanford Machine Learning Group team pitted its software against four Stanford radiologists, giving each of them 420 X-ray images. This graphic shows how the radiologists–represented by the orange Xs–did compared to the program–represented by the blue curve.”