Transfer Learning:
Transfer learning means reusing the learned parameters (weights and biases) of a pre-trained network for a new task. Usually we keep the network's earlier layers and their learned parameters unchanged, and modify only the last few layers (e.g. the fully connected layer and the classification layer), depending on the application.
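To make the idea concrete, here is a minimal stdlib-only Python sketch (my own illustration, not from any particular library): a tiny "pre-trained" first layer is kept frozen while only a new output layer is trained on a toy task. The weights, data and learning rate are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Stand-in for "pre-trained" first-layer weights; in practice these would
# be loaded from a model trained on a large dataset. They stay frozen.
W1 = [[0.5, -0.3], [-0.2, 0.8]]

def hidden(x):
    # Frozen feature extractor: one linear layer followed by ReLU.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]

# New task-specific head: a single logistic output unit, trained from scratch.
w2 = [random.uniform(-0.1, 0.1) for _ in range(2)]
b2 = 0.0

def predict(x):
    h = hidden(x)
    z = sum(w * hi for w, hi in zip(w2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid probability

# Toy labelled data for the new task.
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]

lr = 0.5
for _ in range(300):
    for x, y in data:
        h = hidden(x)
        g = predict(x) - y          # gradient of log loss w.r.t. the logit
        # Only the new head's parameters are updated; W1 is never touched.
        for i in range(2):
            w2[i] -= lr * g * h[i]
        b2 -= lr * g
```

In a real framework the same pattern appears as "freeze the backbone, replace and train the final layer"; only the head's parameters receive gradient updates.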
Fine tuning:
Fine tuning is an optimization process: we adjust the network to get the best results on the new task. For example, we may change the number of layers, the number of filters, or the learning rate; the model has many hyperparameters that can be tuned.
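One common way to tune such hyperparameters is a simple grid search: train with each combination, score it on a validation set, and keep the best. The sketch below (my own illustration) uses a stand-in scoring function; in practice `validation_score` would run a full training loop and return validation accuracy.

```python
import itertools

def validation_score(lr, n_filters):
    # Stand-in for training the model with these settings and measuring
    # validation accuracy; the formula here is purely illustrative.
    return 1.0 - abs(lr - 0.01) * 10 - abs(n_filters - 64) / 256

# Try every combination of learning rate and filter count.
grid = itertools.product([0.1, 0.01, 0.001], [32, 64, 128])
best = max(grid, key=lambda cfg: validation_score(*cfg))
```

With the illustrative scorer above, `best` comes out as `(0.01, 64)`; with a real training loop the same pattern selects whichever configuration validates best.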
Learning from scratch:
Learning from scratch means building a network like AlexNet or GoogLeNet on your own. Every pre-trained network such as AlexNet has a fixed, standard number of layers, but if you build a network from scratch you must decide the number of layers, the number of filters, and so on yourself. The goal is a model that gives the best results for your application, so you typically proceed by trial and error: increasing or decreasing the number of layers and filters, deciding which normalization technique to use, deciding which activation function to use, etc.
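When experimenting with depth and filter counts, it helps to see how each choice changes the model's size. Here is a small sketch (my own illustration, with assumed 3x3 kernels and channel counts) that computes the parameter count of two candidate CNN depths:

```python
def conv_params(in_ch, out_ch, k):
    # A conv layer has k*k*in_ch weights per output channel, plus one bias
    # per output channel.
    return (k * k * in_ch + 1) * out_ch

# Two candidate depths for a from-scratch CNN on 3-channel (RGB) images.
shallow = conv_params(3, 32, 3) + conv_params(32, 64, 3)          # 2 conv layers
deep = shallow + conv_params(64, 128, 3)                          # 3 conv layers
```

Here `shallow` is 19,392 parameters and `deep` is 93,248: adding one more 128-filter layer almost quintuples the model, which is exactly the kind of trade-off you weigh at each trial-and-error step.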
You could read my survey paper, where I have discussed different deep learning architectures and techniques that have been used for medical-domain applications.