Overview and Logs for the tini2p Dev Meeting Held on 2019-09-05

Posted by: el00ruobuob / oneiric

<tini2p_gitlab> meeting time!
<tini2p_gitlab> 0: Greetings
<tini2p_gitlab> l
<DavidBurkett> :wave:
<tini2p_gitlab> 🙂
<tini2p_gitlab> 1: What's been done
<tini2p_gitlab> opened merge request for tunnels implementation: https://gitlab.com/tini2p/tini2p/merge_requests/15
<tini2p_gitlab> it's a considerable amount of work (around a month's worth), so I'll leave it up for a bit (a few days)
<tini2p_gitlab> going to take a day away from that code, and come back to it for manual review
<DavidBurkett> Awesome! Is there anybody from i2p actively reviewing it? If not, I'll review with what little knowledge I have
<tini2p_gitlab> remember how I said "hopefully, no more major refactors"? well, so much for that. I overlooked some parts of OutboundEndpoint and InboundGateway processing, and had to refactor how I extract I2NP fragments from tunnel messages
<tini2p_gitlab> @DavidBurkett many thanks! I'm the only active reviewer, so any input from you is very welcome
<tini2p_gitlab> fresh eyes are always helpful
<DavidBurkett> :thumbsup: I will have a look
<tini2p_gitlab> there are still some aspects of peer selection that remain, mostly around restrictions on peers from the same /16 IPv4, and any given peer only participating in ~33% of active tunnels
<tini2p_gitlab> unsure at what layer of abstraction I should implement these restrictions. right now, these restrictions make sense at the RouterContext layer for my impl, but TBD
<tini2p_gitlab> the RouterContext will ultimately be pulling peers from the NetDB, and feeding them into the PoolManager
<tini2p_gitlab> so the work in that PR represents what I've been working on since the last meeting
<tini2p_gitlab> a lot of time was spent on peer profiling and tunnel testing
<tini2p_gitlab> and the refactors for the OutboundEndpoint sending a TunnelGateway message when the delivery is to another tunnel
<tini2p_gitlab> from the existing I2P docs, it's not very clear when the different tunnel delivery types are used (Local, Router, Tunnel)
<tini2p_gitlab> it took me implementing the things, and thinking about how the pieces fit together, to really make sense of it
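The peer-selection restrictions mentioned above (same-/16 exclusion, ~33% participation cap) could be sketched roughly as follows. This is a minimal illustration in Python, not tini2p's actual C++ code; the names `Peer`, `same_subnet_16`, `eligible`, and `MAX_TUNNEL_SHARE` are invented for the example:

```python
import ipaddress
from collections import namedtuple

# Hypothetical cap: a peer may participate in at most ~33% of active tunnels
MAX_TUNNEL_SHARE = 0.33

Peer = namedtuple("Peer", ["id", "ip"])

def same_subnet_16(a, b):
    """True if two IPv4 addresses share the same /16 prefix."""
    return ipaddress.ip_address(b) in ipaddress.ip_network(f"{a}/16", strict=False)

def eligible(candidate, hop_ips, tunnel_counts, active_tunnels):
    """Apply the /16 and participation restrictions to a candidate peer."""
    # reject peers in the same /16 as any hop already chosen for this tunnel
    if any(same_subnet_16(candidate.ip, ip) for ip in hop_ips):
        return False
    # reject peers already participating in too large a share of active tunnels
    if active_tunnels and tunnel_counts.get(candidate.id, 0) / active_tunnels > MAX_TUNNEL_SHARE:
        return False
    return True
```

Whether such checks belong in the RouterContext or in the PoolManager is exactly the open layering question raised above.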
<tini2p_gitlab> basically, afaiu, Local is used by the InboundGateway to tell the InboundEndpoint, "this message is for you"
<tini2p_gitlab> Router is used by the OutboundGateway to tell the OutboundEndpoint (and maybe IBGW to IBEP) that the I2NP message should be delivered directly to a router
<tini2p_gitlab> e.g. for DatabaseStore and DatabaseLookup messages
<tini2p_gitlab> Tunnel delivery is used by the OutboundGateway to tell the OutboundEndpoint (and maybe IBGW to IBEP) that the I2NP message should be delivered to the InboundGateway of another tunnel
<tini2p_gitlab> Router and Tunnel don't make sense to me for inbound tunnels, unless some kind of chaining is used. not clear from the specs that it ever is
<tini2p_gitlab> for example, if Alice sends an I2NP message through an outbound tunnel to the IBGW of another tunnel (with some indicator for chaining), and that IBGW uses Router or Tunnel delivery to tell the IBEP to send the I2NP message to the IBGW of yet another tunnel
<tini2p_gitlab> maybe that exists, but I've implemented the IBGW only using Local delivery to send I2NP messages to the IBEP
<tini2p_gitlab> Local delivery isn't used for outbound tunnels
<tini2p_gitlab> anyway, figuring all that out took a bit of time, and I may send spec diffs upstream to clarify the use of the different delivery types
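The understanding of the three delivery types described above could be modeled like this. A toy sketch in Python (tini2p is C++, and `Delivery`/`endpoint_action` are invented names for illustration, not anything from the spec or codebase):

```python
from enum import Enum

class Delivery(Enum):
    LOCAL = 0   # IBGW -> IBEP: "this message is for you"
    ROUTER = 1  # endpoint delivers the I2NP message directly to a router
    TUNNEL = 2  # endpoint forwards the I2NP message to another tunnel's IBGW

def endpoint_action(delivery, inbound):
    """What a tunnel endpoint does with a fully reassembled I2NP message."""
    if delivery is Delivery.LOCAL:
        if not inbound:
            # per the discussion above, Local delivery is inbound-only
            raise ValueError("Local delivery is not used for outbound tunnels")
        return "process locally at the inbound endpoint"
    if delivery is Delivery.ROUTER:
        return "send directly to the named router (e.g. DatabaseStore/DatabaseLookup)"
    return "wrap in a TunnelGateway message for another tunnel's IBGW"
```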
<tini2p_gitlab> 2: What's next
<tini2p_gitlab> RouterContext
<tini2p_gitlab> and related classes
<tini2p_gitlab> this set of classes will be the main brains tying everything together
<tini2p_gitlab> RouterContext will use the transport (NTCP2) to communicate directly with other routers
<tini2p_gitlab> it will use an I2NP handler to pass messages to either the NetDB or the Tunnel PoolManager for further processing
<tini2p_gitlab> similarly, resulting messages from NetDB and Tunnel processing will be returned to the RouterContext, and sent back out over NTCP2 to the appropriate router
<tini2p_gitlab> the RouterContext will also be the main owner of the router's RouterInfo
<tini2p_gitlab> LeaseSets for local InboundTunnels will also be compiled by the RouterContext, and stored in the NetDB
<tini2p_gitlab> setting a goal for myself to have the RouterContext finished (in a minimally working sense) by next Thursday (2019-09-12)
<tini2p_gitlab> with RouterContext finished/merged, I'll tag a release candidate
<tini2p_gitlab> I will be implementing a Docker/container test network for integration/end-to-end testing, to ensure all the pieces work together
<tini2p_gitlab> I will also make changes to match the latest updates to ECIES-X25519, posted by zzz this past weekend
<tini2p_gitlab> ECIES-X25519 changes will also go into the alpha release candidate
<tini2p_gitlab> with the RouterContext and ECIES-X25519 changes in place, tini2p will have an internally consistent implementation of I2P
<tini2p_gitlab> and I'll tag the alpha release a week after the release candidate
<tini2p_gitlab> after the alpha release, I'll focus on ElGamal tunnel building, and integration tests with Java I2P and i2pd
<tini2p_gitlab> end-to-end sessions will still be tini2p-to-tini2p, but tunnel building should be possible through Java I2P and i2pd
<tini2p_gitlab> the Docker/container setup should be a nice testbed for inter-implementation testing, so hopefully it will be useful for other implementations as well
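The message flow described for the RouterContext (transport in, dispatch to NetDB or PoolManager, replies back out over the transport) could be sketched like so. This is an illustrative Python toy, not tini2p's actual C++ API; all class and method names here are invented:

```python
class RouterContext:
    """Toy sketch of the dispatch flow described above."""

    def __init__(self, transport, netdb, pool_manager):
        self.transport = transport          # e.g. the NTCP2 transport
        self.netdb = netdb
        self.pool_manager = pool_manager

    def handle_i2np(self, message):
        # dispatch incoming I2NP messages to the NetDB or the tunnel PoolManager
        if message["type"] in ("DatabaseStore", "DatabaseLookup"):
            replies = self.netdb.handle(message)
        else:
            replies = self.pool_manager.handle(message)
        # resulting messages are sent back out to the appropriate router
        for dest, reply in replies:
            self.transport.send(dest, reply)

class StubComponent:
    """Trivial stand-in for transport/NetDB/PoolManager, for demonstration."""

    def __init__(self):
        self.sent = []

    def handle(self, msg):
        return [("peer1", {"type": "reply"})]

    def send(self, dest, msg):
        self.sent.append((dest, msg))
```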
<tini2p_gitlab> TBD
<tini2p_gitlab> the post-alpha release will also get some (a huge number of) fuzz tests
<tini2p_gitlab> basically every class with a deserialize method or buffer input interface will be getting a fuzz driver attached to it
<tini2p_gitlab> I may also take a small break from tini2p post-alpha release to preemptively prevent burnout
<tini2p_gitlab> these past 9 months have been basically coding tini2p every day, often 10-12+ hours a day
<tini2p_gitlab> 3: Questions/comments
<DavidBurkett> This project structure is really clean, and the code easy to follow. Working on building now. You're doing great work
<tini2p_gitlab> aww :3 thanks @DavidBurkett that means a lot, very kind
<tini2p_gitlab> still needs a lot of work to be production-ready, but I'm doing my best to code well
<tini2p_gitlab> some of the structure is heavily influenced by components of Java I2P, i2pd and ire. there are some really good bits in those codebases
<DavidBurkett> Well I'm just talking about your cmake files, your handling of dependencies, etc. Very clean
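A fuzz driver for a deserialize/buffer-input interface boils down to feeding random bytes and checking that malformed input is rejected cleanly rather than crashing. A minimal random-bytes sketch in Python (real fuzzing would use a coverage-guided tool such as libFuzzer; `parse_frame` is an invented stand-in parser, not a tini2p class):

```python
import os
import random
import struct

def parse_frame(buf: bytes):
    """Invented example parser: 2-byte big-endian length prefix + payload."""
    if len(buf) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack(">H", buf[:2])
    if len(buf) - 2 < length:
        raise ValueError("truncated payload")
    return buf[2:2 + length]

def fuzz_one(parser, data):
    """Feed one input; a clean ValueError rejection is acceptable."""
    try:
        parser(data)
    except ValueError:
        pass  # malformed input rejected cleanly
    # any other exception (or a crash) is a bug the fuzzer surfaces

def fuzz(parser, iterations=1000, max_len=64):
    """Drive the parser with random byte strings of random lengths."""
    for _ in range(iterations):
        fuzz_one(parser, os.urandom(random.randrange(max_len + 1)))
```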
<DavidBurkett> And I've seen i2pd. This is night and day better (although i2pd is also pretty easy to work with)
<tini2p_gitlab> i2pd has very different project goals and design. i2pd aims to be a one-stop-shop router, tini2p aims to be used as an I2P library by other projects
<DavidBurkett> Ah, that makes sense
<DavidBurkett> I thought i2pd was also meant to be used as a library
<tini2p_gitlab> the CMake and dependency handling took influence from Kovri (and by extension i2pd)
<tini2p_gitlab> i2pd can also be used as a library
<tini2p_gitlab> what I mean is that i2pd contains an HTTP server, router console, bundled client, all-in-one. somewhat similar to Java I2P, though with fewer apps bundled in
<DavidBurkett> :thumbsup: Understood
<tini2p_gitlab> tini2p moves away from that all-in-one design, into something highly modular
<tini2p_gitlab> a future goal is to split tini2p up further, having common structures be their own repo, with separate core and client libs
<DavidBurkett> Makes sense. Yea, I pulled i2pd into Grin++ at one point, but I had to decouple a bunch of things to be able to pull in just what I want. It wasn't an easy task
<tini2p_gitlab> that's somewhat far down the road though, but it would help support stuff like pluggable end-to-end and signing crypto
<DavidBurkett> That's awesome, you'll get there for sure.
<tini2p_gitlab> 🙂 thanks
<tini2p_gitlab> alright, I think that's about all for this meeting. unless you have something else @DavidBurkett?
<DavidBurkett> Nothing from me
<tini2p_gitlab> for sure
<tini2p_gitlab> 4: Next meeting
<tini2p_gitlab> 2019-09-19 18:00 UTC
<tini2p_gitlab> thanks all for attending (that means you @DavidBurkett 😉
<DavidBurkett> 🙂
<tini2p_gitlab> @tini2p_gitlab stares intently at the gavel, wondering if it feels

Post tags: Dev Diaries, Kovri I2P Router, Cryptography
