Went for a nice bike ride this morning. Ate, and now it's time to start working, thinking, and writing. Will do a bit of private journaling this morning.
I spent too much time in the shower. Just comfortable and thinking. Probably should change this to be meditation instead of just standing there wasting resources.
Have my PT appointment at 10:30. Getting dinner with Jon tonight.
So I have been thinking while driving. Basically I want to see what route is in front of me, or how to get to it. Originally I was going to use drone mapping to build a 3D model. That might be cool in the future, but it's probably overkill for now. I think the best thing to do is make it a dead-simple AR application.
Here's what I'm picturing: I'm at the crag, I open the app, point it at the rock, and it tells me what route is in front of me. Ideally it should also draw a line showing the route's direction.
Got cleared for everything as expected. Obviously the right knee is not perfect yet but it’s damn good.
So I spent most of the day playing around with ARKit. I am able to add entities to the AR scene, which is nice. There's still so much more to do, though. We need to check whether this is even possible at a crag. Will I have to rely on other techniques provided by ARKit? I am not sure. The iOS vs. Android problem comes up here too, but at least the documentation for iOS is good.
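For reference, the entity placement I got working looks roughly like this. A minimal sketch assuming RealityKit with an `ARView`; the function and label names are just illustrative, and this only runs on an iOS device with an active AR session:

```swift
import ARKit
import RealityKit

// Sketch: float a text label for a route two meters in front of where
// the AR session started. `addRouteLabel` is a made-up helper name.
func addRouteLabel(to arView: ARView, name: String) {
    // Extruded 3D text mesh; font size is in meters in world space.
    let mesh = MeshResource.generateText(name,
                                         extrusionDepth: 0.02,
                                         font: .systemFont(ofSize: 0.2))
    let label = ModelEntity(mesh: mesh,
                            materials: [SimpleMaterial(color: .white, isMetallic: false)])
    // World-space anchor 2 m in front of the session origin (-Z is forward).
    let anchor = AnchorEntity(world: SIMD3<Float>(0, 0, -2))
    anchor.addChild(label)
    arView.scene.addAnchor(anchor)
}
```

The open question is still whether a world-space anchor like this stays stable against a big featureless rock face, which is exactly what needs testing at the crag.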
Would like to test the entities at the crag itself. Let's see if the text shows up, lol. I'll have to see whether we can get a geo calibration or not; it might not allow for high accuracy. If not, we will have to figure out another way to do this, which might force us to move away from ARKit to something more generic. We might have to do a lot of legwork to make this work properly.
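ARKit does expose a direct way to answer the geo-calibration question: geo tracking (iOS 14+) only works in areas Apple has mapped, and there's an availability check for the current location. A sketch of what I'd run at the crag (device-only, so untested here):

```swift
import ARKit

// Sketch: check whether ARKit geo tracking works at the current location.
// Most crags are probably outside Apple's mapped coverage, which is the
// risk noted above.
func checkGeoTrackingAvailability() {
    guard ARGeoTrackingConfiguration.isSupported else {
        print("This device does not support geo tracking at all")
        return
    }
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        if available {
            print("Geo tracking available here; ARGeoAnchor placement should work")
        } else {
            // Fall back to plain world tracking plus our own localization.
            print("Unavailable: \(error?.localizedDescription ?? "outside mapped area")")
        }
    }
}
```

If this comes back unavailable at real crags, that's the signal to move to the GPS/compass/photo-matching approach below.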
At minimum I am thinking of taking the compass heading and GPS location, but the error induced by GPS can be very large. So we would want a ground model of the routes (at minimum, photos), then do matching against those photos to accurately place the routes. We would really need photos from multiple angles as well, which means a lot of data collection. We may need to run an ML model to do this properly. I think there is still much to figure out and re-evaluate as things move.
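To get a feel for the GPS-plus-compass version, here's a small sketch (all coordinates and names are made up) that converts the user's GPS fix and a route's surveyed coordinate into a local east/north offset and a compass bearing, which is roughly what an AR session needs to place a marker. It also shows the error problem concretely: a typical 5–10 m horizontal GPS error is a big fraction of the ~20 m offset in the example, easily enough to point at the wrong route.

```swift
import Foundation

let earthRadius = 6_371_000.0  // meters, mean Earth radius

func degToRad(_ d: Double) -> Double { d * .pi / 180 }

/// Equirectangular approximation: accurate enough over crag-scale
/// distances (well under 1 km).
func localOffsetMeters(fromLat: Double, fromLon: Double,
                       toLat: Double, toLon: Double) -> (east: Double, north: Double) {
    let meanLat = degToRad((fromLat + toLat) / 2)
    let east = degToRad(toLon - fromLon) * cos(meanLat) * earthRadius
    let north = degToRad(toLat - fromLat) * earthRadius
    return (east, north)
}

/// Compass bearing in degrees clockwise from true north.
func bearingDegrees(east: Double, north: Double) -> Double {
    let b = atan2(east, north) * 180 / .pi
    return b < 0 ? b + 360 : b
}

// Hypothetical example: user at the base of a wall, route anchor a short
// walk to the east-northeast.
let offset = localOffsetMeters(fromLat: 40.00000, fromLon: -105.30000,
                               toLat: 40.00010, toLon: -105.29980)
let dist = (offset.east * offset.east + offset.north * offset.north).squareRoot()
let brg = bearingDegrees(east: offset.east, north: offset.north)
print(String(format: "%.1f m at %.0f° true", dist, brg))
```

Comparing that bearing against the device compass (which itself drifts by several degrees) gives a rough pointing direction, but nothing route-precise, which is why the photo-matching idea still seems necessary.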