Payjoin-cli should cache ohttp-keys for re-use#1035
zealsham wants to merge 3 commits into payjoin:master
Conversation
Pull Request Test Coverage Report for Build 23672911589 (Coveralls)
Force-pushed 27c936f to 8512a28
Force-pushed 0b94117 to 3744b03
```rust
    relay_manager: Arc<Mutex<RelayManager>>,
) -> Result<ValidatedOhttpKeys> {
    println!("before first some");
    if let Some(ohttp_keys) = config.v2()?.ohttp_keys.clone() {
```
Oops! The println statements I used for debugging got committed accidentally; cleaning it up.
Force-pushed 3744b03 to 0fe2229
@zealsham I am going to mark this as draft. Do you mind cleaning up the commit messages and history and re-opening? Thank you.
Force-pushed 971c8d3 to 2198e3f
@arminsabouri the PR is ready
payjoin-cli/src/app/v2/ohttp.rs (Outdated)

```rust
// try cache for this selected relay first
if let Some(cached) = read_cached_ohttp_keys(&selected_relay) {
    println!("using Cached keys for relay: {selected_relay}");
```
Can we use the logger instead of println?
payjoin-cli/src/app/v2/ohttp.rs (Outdated)

Suggested change:

```diff
-    println!("using Cached keys for relay: {selected_relay}");
+    tracing::info!("using Cached keys for relay: {selected_relay}");
```
```rust
// try cache for this selected relay first
if let Some(cached) = read_cached_ohttp_keys(&selected_relay) {
    println!("using Cached keys for relay: {selected_relay}");
    if !is_expired(&cached) && cached.relay_url == selected_relay {
```
Why would read_cached_ohttp_keys return expired keys, or keys for a different relay? Perhaps read_cached_ohttp_keys should return ValidatedOhttpKeys?
There is a possibility of the cached keys being expired by the time we try to use them, so it checks that the keys being read haven't exceeded that duration before using them.
if the relay_url were the key the comparison would not be necessary, but instead it's just the host part of this key. i think that only causes collisions when different ports are used, see other comment
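A minimal sketch of the reviewer's suggestion that read_cached_ohttp_keys could validate freshness itself and hand back only usable entries, so callers need neither an is_expired() check nor a relay comparison. The names here (fresh_keys, the CachedOhttpKeys field layout, Vec<u8> standing in for the real key type) are hypothetical stand-ins for the PR's types:

```rust
use std::time::Duration;

// Hypothetical mirror of the PR's cached-entry shape.
struct CachedOhttpKeys {
    ohttp_keys: Vec<u8>, // placeholder for the real OhttpKeys type
    relay_url: String,
    fetched_at: u64, // unix seconds
}

const CACHE_DURATION: Duration = Duration::from_secs(6 * 30 * 24 * 60 * 60);

// Return the cached entry only if it is still fresh; stale entries are
// reported as a cache miss, so the caller never has to re-check expiry.
fn fresh_keys(entry: CachedOhttpKeys, now: u64) -> Option<CachedOhttpKeys> {
    if now.saturating_sub(entry.fetched_at) < CACHE_DURATION.as_secs() {
        Some(entry)
    } else {
        None
    }
}
```

If the cache were additionally keyed by the full relay URL, the `cached.relay_url == selected_relay` comparison in the hunk above would become unnecessary as well.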
```rust
    fetched_at: u64,
}

fn get_cache_file(relay_url: &url::Url) -> PathBuf {
```
Why are cached keys not being persisted in the database?
They are persisted to the file system and then get replaced with new keys if the cached one expires. I think it's more straightforward than adding it to the db. On the file system keys are stored per-relay; each relay can have one key cache at a time.
we already use sqlite, so why not have a table with relay URL as the key and pubkey as the value? no need for serde, key mangling, or remembering that an additional bit of mutable storage exists
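For illustration, the suggested shape is a single table keyed by the full relay URL, e.g. `CREATE TABLE ohttp_keys_cache (relay_url TEXT PRIMARY KEY, ohttp_keys BLOB NOT NULL, fetched_at INTEGER NOT NULL)`. This std-only sketch models that table with a HashMap (KeyCache and its methods are hypothetical, not the PR's API); no serde and no per-relay files on disk:

```rust
use std::collections::HashMap;

// In-memory stand-in for the suggested sqlite table:
// full relay URL as key, raw key bytes plus fetch time as value.
struct KeyCache {
    rows: HashMap<String, (Vec<u8>, u64)>,
}

impl KeyCache {
    fn new() -> Self {
        Self { rows: HashMap::new() }
    }

    // UPSERT semantics: replace any previous row for this relay URL.
    fn store(&mut self, relay_url: &str, keys: Vec<u8>, fetched_at: u64) {
        self.rows.insert(relay_url.to_string(), (keys, fetched_at));
    }

    fn load(&self, relay_url: &str) -> Option<&(Vec<u8>, u64)> {
        self.rows.get(relay_url)
    }
}
```

Because the full URL is the primary key, two gateways on the same host but different ports get distinct rows for free.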
Another note: keys correspond to directories, not relays. Might just be a mistype but it's a concrete distinction I felt necessary to repeat for certainty.
this is very important
This PR adds ohttp-keys caching to payjoin-cli. ohttp-keys should not be fetched each time; a cached key should be used instead. Keys expire in 6 months.
Force-pushed 2198e3f to 2677446
nothingmuch left a comment:
i think the cache TTL being hard coded to 6 months is a bit problematic. if a directory's keys are lost, this may render its URL useless for that period of time, giving operators no recourse.
instead we should probably use standard HTTP cache control headers, which would allow an operator with high confidence in their deployment to set more aggressive caching parameters.
rfc 9458 says (sections 5.2, 6.4) that a server must generate a response with a 422 status in response to an invalid message, which is what the server sees if it can't decapsulate because the client is using a stale pubkey.
i'm not sure what the implementation and/or testing for that is right now, but we should handle that by fetching the ohttp key and comparing it, erroring if there is an inconsistency but allowing opting in to just overwrite the old key with the new key (i.e. making such errors into warnings)
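One possible shape for that stale-key recovery, sketched with a hypothetical KeyAction enum and on_response helper (the 422 trigger follows RFC 9458; allow_overwrite stands in for the proposed opt-in that turns the error into a warning):

```rust
#[derive(Debug, PartialEq)]
enum KeyAction {
    Keep,      // response decapsulated fine; cached key is still good
    Overwrite, // replace the cached key with the freshly fetched one
    Error,     // inconsistency the user has not opted in to resolving
}

// On a 422 (gateway could not decapsulate, per RFC 9458 sections 5.2 and
// 6.4), refetch the gateway's key and compare it with the cached one.
fn on_response(
    status: u16,
    cached_key: &[u8],
    fetched_key: &[u8],
    allow_overwrite: bool,
) -> KeyAction {
    if status != 422 {
        return KeyAction::Keep;
    }
    if cached_key == fetched_key {
        // Same key yet still 422: not a staleness problem, so surface it.
        return KeyAction::Error;
    }
    if allow_overwrite {
        KeyAction::Overwrite
    } else {
        KeyAction::Error
    }
}
```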
```rust
// 6 months
const CACHE_DURATION: Duration = Duration::from_secs(6 * 30 * 24 * 60 * 60);
```
this is a pretty long duration to hard code. it seems more appropriate to allow OHTTP gateways to control this by using the cache control mechanisms defined in HTTP, so that operators can determine their own policies?
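A minimal, std-only sketch of honoring the gateway's Cache-Control header instead of a hard-coded TTL. The max_age helper is hypothetical; a real client would also handle directives like no-store and s-maxage (RFC 9111):

```rust
use std::time::Duration;

// Extract max-age from a Cache-Control header value, letting the OHTTP
// gateway operator set the caching policy rather than the client.
fn max_age(cache_control: &str) -> Option<Duration> {
    cache_control
        .split(',')
        .map(str::trim)
        .find_map(|directive| directive.strip_prefix("max-age="))
        .and_then(|secs| secs.parse::<u64>().ok())
        .map(Duration::from_secs)
}
```

A fetched key would then be cached for `max_age(..)` seconds, falling back to a conservative default (or no caching) when the header is absent.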
```rust
dirs::cache_dir()
    .unwrap()
    .join("payjoin-cli")
    .join(relay_url.host_str().unwrap())
```
why not store relay_url in full so two gateways on the same host don't collide?
the only collision that can happen in practice given that we set things up to be on the top level of a single vhost, is the same host with different ports
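A sketch of deriving the cache key from the full relay URL rather than just `host_str()`, so the same host on different ports (or schemes) does not collide; cache_file_name is a hypothetical helper that flattens the URL into a safe file name:

```rust
// Map every non-alphanumeric character to '_' so the full URL, including
// scheme and port, becomes a single unambiguous cache file name.
fn cache_file_name(relay_url: &str) -> String {
    relay_url
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '_' })
        .collect()
}
```

With this, `https://relay.example:443` and `https://relay.example:8443` get distinct cache files, which is exactly the collision the review points out.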
```rust
            return Ok(ValidatedOhttpKeys { ohttp_keys: keys, relay_url: selected_relay }),
        Ok(keys) => {
            // Cache the keys if they are not already cached for this relay
            if read_cached_ohttp_keys(&selected_relay).is_none() {
```
this reads from the cache a second time unnecessarily
```rust
if let Some(cached) = read_cached_ohttp_keys(&selected_relay) {
    tracing::info!("using Cached keys for relay: {selected_relay}");
    if !is_expired(&cached) && cached.relay_url == selected_relay {
        return Ok(ValidatedOhttpKeys {
```
returning short circuits here which removes the need to read from the cache a second time for writeback
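The short-circuit flow can be sketched as follows, with u64 fetch timestamps standing in for key material (get_keys is a hypothetical helper; the point is that a fresh hit returns early, so the fetch path can write back unconditionally and the cache is read only once):

```rust
use std::collections::HashMap;

fn get_keys(cache: &mut HashMap<String, u64>, relay: &str, now: u64, ttl: u64) -> u64 {
    // 1. Try the cache; a fresh hit returns immediately.
    if let Some(&fetched_at) = cache.get(relay) {
        if now.saturating_sub(fetched_at) < ttl {
            return fetched_at;
        }
    }
    // 2. Cache miss or stale entry: "fetch" new keys and write back
    //    unconditionally. No second cache read is needed because the
    //    hit path already returned above.
    cache.insert(relay.to_string(), now);
    now
}
```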
see also the key consistency reference of rfc 9458, which discusses strategies for shared caching, which i think we can include in the mailroom services... that might be sufficient motivation to not use a reqwest middleware for caching.
as Dan pointed out, this should be keyed by the directory url, not the relay url, everywhere i wrote