2019 May 07
Location: Coors Tech
Attending
Jihyun, Andy, Thomas, Bin, Iga
Key takeaways
- We discussed the SEG abstract that interests Khalid (Wu & McMechan, 2018)
- Novel aspect: the weights of the CNN are updated at each iteration of the inversion
- The abstract mixes geophysics and machine learning terminology in confusing ways
- Unsure how or why the CNN improves the inversion
Problems
- The example uses a very good initial model; does the algorithm still work with a poor initial model?
- So many layers! 26! Does the CNN need to be so big?
- Abstract has few details
Comments
- Train a CNN that takes a vector of ones as input and outputs an initial velocity model
- 26 hidden layers
- For FWI, where d(v) is the forward-modeled data given a velocity model v, the objective function to be minimized is C(v) = ||d(v) - d_observed||^2
- If the CNN is G(w) = v, where w are the weights of the CNN and v is the velocity model, then the objective function becomes C(w) = ||d(G(w)) - d_observed||^2
- Minimize the new objective function by adjusting the weights w of the CNN, using the gradient g of C with respect to w (see the sketch after this list)
- Paper under review
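
The reparameterized objective above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch example (the abstract does not name a framework): a small CNN G(w) maps a fixed vector of ones to a velocity model v, and gradient descent on the weights w minimizes C(w) = ||d(G(w)) - d_observed||^2. The forward operator is a toy linear stand-in for wave-equation modeling, the network is far shallower than the abstract's 26 layers, and all sizes and names are illustrative assumptions, not the authors' implementation.

# Sketch of CNN-reparameterized FWI: minimize C(w) = ||d(G(w)) - d_obs||^2
# over the CNN weights w instead of over the velocity model v directly.
import torch
import torch.nn as nn

N = 128  # number of samples in a 1-D velocity model (hypothetical size)

class VelocityCNN(nn.Module):
    """G(w): maps a constant input vector to a velocity model.
    Kept shallow here; the abstract's network has 26 hidden layers."""
    def __init__(self, depth=4, channels=16):
        super().__init__()
        layers = [nn.Conv1d(1, channels, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Conv1d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU()]
        layers += [nn.Conv1d(channels, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Toy "forward modeling" d(v): a fixed random linear operator standing in
# for the wave-equation solver (an assumption, not the paper's physics).
torch.manual_seed(0)
A = torch.randn(N, N)
def forward_model(v):
    return A @ v.flatten()

# Synthetic "observed" data from a hypothetical true model (km/s).
v_true = torch.linspace(1.5, 4.5, N)
d_observed = forward_model(v_true)

G = VelocityCNN()
ones = torch.ones(1, 1, N)              # the fixed all-ones input vector
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for it in range(2000):
    opt.zero_grad()
    v = G(ones)                                              # v = G(w)
    loss = torch.sum((forward_model(v) - d_observed) ** 2)   # C(w)
    loss.backward()        # gradient g of C with respect to the weights w
    opt.step()             # update the CNN weights at each inversion step

Because the velocity model is now an implicit function of w, standard deep-learning optimizers apply directly, and the chain rule composes the usual FWI data gradient (here obtained by autograd through the toy operator) with the CNN's own backpropagation.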
Topics of discussion for next week
- Further discussion on this abstract
- Thomas’s TasNet usage