diff --git a/.gitbook/assets/air_call_execution_1 (1).png b/.gitbook/assets/air_call_execution_1 (1).png new file mode 100644 index 0000000..8eaf925 Binary files /dev/null and b/.gitbook/assets/air_call_execution_1 (1).png differ diff --git a/.gitbook/assets/air_call_execution_1.png b/.gitbook/assets/air_call_execution_1.png new file mode 100644 index 0000000..8eaf925 Binary files /dev/null and b/.gitbook/assets/air_call_execution_1.png differ diff --git a/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (1).png b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (1).png new file mode 100644 index 0000000..094e274 Binary files /dev/null and b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (1).png differ diff --git a/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (2).png b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (2).png new file mode 100644 index 0000000..094e274 Binary files /dev/null and b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (2).png differ diff --git a/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3).png b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3).png new file mode 100644 index 0000000..094e274 Binary files /dev/null and b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3).png differ diff --git a/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3).png b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3).png new file mode 100644 index 0000000..094e274 Binary files /dev/null and b/.gitbook/assets/air_fold_4 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3).png differ diff --git a/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (1).png b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (1).png new file mode 100644 index 0000000..9c3bb75 Binary files /dev/null and 
b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (1).png differ diff --git a/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (2).png b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (2).png new file mode 100644 index 0000000..9c3bb75 Binary files /dev/null and b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (2).png differ diff --git a/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (3).png b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (3).png new file mode 100644 index 0000000..9c3bb75 Binary files /dev/null and b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1) (3).png differ diff --git a/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1).png b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1).png new file mode 100644 index 0000000..9c3bb75 Binary files /dev/null and b/.gitbook/assets/air_null_6 (1) (2) (2) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (3) (1).png differ diff --git a/.gitbook/assets/air_par_3.png b/.gitbook/assets/air_par_3.png new file mode 100644 index 0000000..e9609c1 Binary files /dev/null and b/.gitbook/assets/air_par_3.png differ diff --git a/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (1).png b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (1).png new file mode 100644 index 0000000..a3364b2 Binary files /dev/null and b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (1).png differ diff --git a/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (2).png b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (2).png new file mode 100644 index 0000000..a3364b2 Binary files /dev/null 
and b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (2).png differ diff --git a/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (3).png b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (3).png new file mode 100644 index 0000000..a3364b2 Binary files /dev/null and b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (3).png differ diff --git a/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (4).png b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (4).png new file mode 100644 index 0000000..a3364b2 Binary files /dev/null and b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (4).png differ diff --git a/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (5).png b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (5).png new file mode 100644 index 0000000..a3364b2 Binary files /dev/null and b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4) (5).png differ diff --git a/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4).png b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4).png new file mode 100644 index 0000000..a3364b2 Binary files /dev/null and b/.gitbook/assets/air_sequential_2 (1) (1) (1) (1) (1) (2) (3) (4) (4) (4).png differ diff --git a/.gitbook/assets/air_xor_5.png b/.gitbook/assets/air_xor_5.png new file mode 100644 index 0000000..c6f25b7 Binary files /dev/null and b/.gitbook/assets/air_xor_5.png differ diff --git a/.gitbook/assets/image (1).png b/.gitbook/assets/image (1).png new file mode 100644 index 0000000..0e378ff Binary files /dev/null and b/.gitbook/assets/image (1).png differ diff --git a/.gitbook/assets/image (10).png b/.gitbook/assets/image (10).png new file mode 100644 index 0000000..fe9c0d0 Binary files /dev/null and b/.gitbook/assets/image (10).png differ diff 
--git a/.gitbook/assets/image (11).png b/.gitbook/assets/image (11).png new file mode 100644 index 0000000..219ea5c Binary files /dev/null and b/.gitbook/assets/image (11).png differ diff --git a/.gitbook/assets/image (12).png b/.gitbook/assets/image (12).png new file mode 100644 index 0000000..a45fc2b Binary files /dev/null and b/.gitbook/assets/image (12).png differ diff --git a/.gitbook/assets/image (13).png b/.gitbook/assets/image (13).png new file mode 100644 index 0000000..b4c373c Binary files /dev/null and b/.gitbook/assets/image (13).png differ diff --git a/.gitbook/assets/image (14).png b/.gitbook/assets/image (14).png new file mode 100644 index 0000000..2ac620a Binary files /dev/null and b/.gitbook/assets/image (14).png differ diff --git a/.gitbook/assets/image (15).png b/.gitbook/assets/image (15).png new file mode 100644 index 0000000..eef0bca Binary files /dev/null and b/.gitbook/assets/image (15).png differ diff --git a/.gitbook/assets/image (16).png b/.gitbook/assets/image (16).png new file mode 100644 index 0000000..7bb6c6b Binary files /dev/null and b/.gitbook/assets/image (16).png differ diff --git a/.gitbook/assets/image (17).png b/.gitbook/assets/image (17).png new file mode 100644 index 0000000..207d8a2 Binary files /dev/null and b/.gitbook/assets/image (17).png differ diff --git a/.gitbook/assets/image (18) (1) (1) (2) (2) (1).png b/.gitbook/assets/image (18) (1) (1) (2) (2) (1).png new file mode 100644 index 0000000..971b427 Binary files /dev/null and b/.gitbook/assets/image (18) (1) (1) (2) (2) (1).png differ diff --git a/.gitbook/assets/image (18) (1) (1) (2) (2).png b/.gitbook/assets/image (18) (1) (1) (2) (2).png new file mode 100644 index 0000000..971b427 Binary files /dev/null and b/.gitbook/assets/image (18) (1) (1) (2) (2).png differ diff --git a/.gitbook/assets/image (18) (1) (1).png b/.gitbook/assets/image (18) (1) (1).png new file mode 100644 index 0000000..971b427 Binary files /dev/null and b/.gitbook/assets/image (18) (1) (1).png 
differ diff --git a/.gitbook/assets/image (18).png b/.gitbook/assets/image (18).png new file mode 100644 index 0000000..5b088f3 Binary files /dev/null and b/.gitbook/assets/image (18).png differ diff --git a/.gitbook/assets/image (19) (1).png b/.gitbook/assets/image (19) (1).png new file mode 100644 index 0000000..9d7fe1f Binary files /dev/null and b/.gitbook/assets/image (19) (1).png differ diff --git a/.gitbook/assets/image (19).png b/.gitbook/assets/image (19).png new file mode 100644 index 0000000..9d7fe1f Binary files /dev/null and b/.gitbook/assets/image (19).png differ diff --git a/.gitbook/assets/image (2).png b/.gitbook/assets/image (2).png new file mode 100644 index 0000000..248a542 Binary files /dev/null and b/.gitbook/assets/image (2).png differ diff --git a/.gitbook/assets/image (20).png b/.gitbook/assets/image (20).png new file mode 100644 index 0000000..fec5e38 Binary files /dev/null and b/.gitbook/assets/image (20).png differ diff --git a/.gitbook/assets/image (21).png b/.gitbook/assets/image (21).png new file mode 100644 index 0000000..68d72ed Binary files /dev/null and b/.gitbook/assets/image (21).png differ diff --git a/.gitbook/assets/image (22).png b/.gitbook/assets/image (22).png new file mode 100644 index 0000000..7ed12d3 Binary files /dev/null and b/.gitbook/assets/image (22).png differ diff --git a/.gitbook/assets/image (23).png b/.gitbook/assets/image (23).png new file mode 100644 index 0000000..7797b97 Binary files /dev/null and b/.gitbook/assets/image (23).png differ diff --git a/.gitbook/assets/image (24) (1) (1).png b/.gitbook/assets/image (24) (1) (1).png new file mode 100644 index 0000000..9ea4d1e Binary files /dev/null and b/.gitbook/assets/image (24) (1) (1).png differ diff --git a/.gitbook/assets/image (24) (1).png b/.gitbook/assets/image (24) (1).png new file mode 100644 index 0000000..9ea4d1e Binary files /dev/null and b/.gitbook/assets/image (24) (1).png differ diff --git a/.gitbook/assets/image (24).png b/.gitbook/assets/image 
(24).png new file mode 100644 index 0000000..2b4e881 Binary files /dev/null and b/.gitbook/assets/image (24).png differ diff --git a/.gitbook/assets/image (25).png b/.gitbook/assets/image (25).png new file mode 100644 index 0000000..c0fd654 Binary files /dev/null and b/.gitbook/assets/image (25).png differ diff --git a/.gitbook/assets/image (26).png b/.gitbook/assets/image (26).png new file mode 100644 index 0000000..7db04be Binary files /dev/null and b/.gitbook/assets/image (26).png differ diff --git a/.gitbook/assets/image (27) (1).png b/.gitbook/assets/image (27) (1).png new file mode 100644 index 0000000..4a02121 Binary files /dev/null and b/.gitbook/assets/image (27) (1).png differ diff --git a/.gitbook/assets/image (27).png b/.gitbook/assets/image (27).png new file mode 100644 index 0000000..be35a96 Binary files /dev/null and b/.gitbook/assets/image (27).png differ diff --git a/.gitbook/assets/image (28) (1) (1).png b/.gitbook/assets/image (28) (1) (1).png new file mode 100644 index 0000000..edead8d Binary files /dev/null and b/.gitbook/assets/image (28) (1) (1).png differ diff --git a/.gitbook/assets/image (28) (1).png b/.gitbook/assets/image (28) (1).png new file mode 100644 index 0000000..edead8d Binary files /dev/null and b/.gitbook/assets/image (28) (1).png differ diff --git a/.gitbook/assets/image (28).png b/.gitbook/assets/image (28).png new file mode 100644 index 0000000..d24047f Binary files /dev/null and b/.gitbook/assets/image (28).png differ diff --git a/.gitbook/assets/image (29).png b/.gitbook/assets/image (29).png new file mode 100644 index 0000000..bc335d7 Binary files /dev/null and b/.gitbook/assets/image (29).png differ diff --git a/.gitbook/assets/image (3).png b/.gitbook/assets/image (3).png new file mode 100644 index 0000000..c042279 Binary files /dev/null and b/.gitbook/assets/image (3).png differ diff --git a/.gitbook/assets/image (30).png b/.gitbook/assets/image (30).png new file mode 100644 index 0000000..1dd2bff Binary files 
/dev/null and b/.gitbook/assets/image (30).png differ diff --git a/.gitbook/assets/image (31) (1).png b/.gitbook/assets/image (31) (1).png new file mode 100644 index 0000000..702129b Binary files /dev/null and b/.gitbook/assets/image (31) (1).png differ diff --git a/.gitbook/assets/image (31).png b/.gitbook/assets/image (31).png new file mode 100644 index 0000000..702129b Binary files /dev/null and b/.gitbook/assets/image (31).png differ diff --git a/.gitbook/assets/image (32).png b/.gitbook/assets/image (32).png new file mode 100644 index 0000000..4a02121 Binary files /dev/null and b/.gitbook/assets/image (32).png differ diff --git a/.gitbook/assets/image (33).png b/.gitbook/assets/image (33).png new file mode 100644 index 0000000..e27e314 Binary files /dev/null and b/.gitbook/assets/image (33).png differ diff --git a/.gitbook/assets/image (34).png b/.gitbook/assets/image (34).png new file mode 100644 index 0000000..be35a96 Binary files /dev/null and b/.gitbook/assets/image (34).png differ diff --git a/.gitbook/assets/image (35).png b/.gitbook/assets/image (35).png new file mode 100644 index 0000000..2cde767 Binary files /dev/null and b/.gitbook/assets/image (35).png differ diff --git a/.gitbook/assets/image (36).png b/.gitbook/assets/image (36).png new file mode 100644 index 0000000..36e97be Binary files /dev/null and b/.gitbook/assets/image (36).png differ diff --git a/.gitbook/assets/image (37).png b/.gitbook/assets/image (37).png new file mode 100644 index 0000000..efe6f7b Binary files /dev/null and b/.gitbook/assets/image (37).png differ diff --git a/.gitbook/assets/image (38) (1).png b/.gitbook/assets/image (38) (1).png new file mode 100644 index 0000000..7f714d6 Binary files /dev/null and b/.gitbook/assets/image (38) (1).png differ diff --git a/.gitbook/assets/image (38) (2) (2) (2) (1).png b/.gitbook/assets/image (38) (2) (2) (2) (1).png new file mode 100644 index 0000000..7f714d6 Binary files /dev/null and b/.gitbook/assets/image (38) (2) (2) (2) (1).png 
differ diff --git a/.gitbook/assets/image (38) (2) (2) (2).png b/.gitbook/assets/image (38) (2) (2) (2).png new file mode 100644 index 0000000..7f714d6 Binary files /dev/null and b/.gitbook/assets/image (38) (2) (2) (2).png differ diff --git a/.gitbook/assets/image (38).png b/.gitbook/assets/image (38).png new file mode 100644 index 0000000..d52f3c9 Binary files /dev/null and b/.gitbook/assets/image (38).png differ diff --git a/.gitbook/assets/image (39) (1) (1).png b/.gitbook/assets/image (39) (1) (1).png new file mode 100644 index 0000000..cddbcc2 Binary files /dev/null and b/.gitbook/assets/image (39) (1) (1).png differ diff --git a/.gitbook/assets/image (39) (1).png b/.gitbook/assets/image (39) (1).png new file mode 100644 index 0000000..cddbcc2 Binary files /dev/null and b/.gitbook/assets/image (39) (1).png differ diff --git a/.gitbook/assets/image (39).png b/.gitbook/assets/image (39).png new file mode 100644 index 0000000..1df0991 Binary files /dev/null and b/.gitbook/assets/image (39).png differ diff --git a/.gitbook/assets/image (4).png b/.gitbook/assets/image (4).png new file mode 100644 index 0000000..f85f881 Binary files /dev/null and b/.gitbook/assets/image (4).png differ diff --git a/.gitbook/assets/image (40).png b/.gitbook/assets/image (40).png new file mode 100644 index 0000000..a83ff8c Binary files /dev/null and b/.gitbook/assets/image (40).png differ diff --git a/.gitbook/assets/image (41).png b/.gitbook/assets/image (41).png new file mode 100644 index 0000000..3ed78ad Binary files /dev/null and b/.gitbook/assets/image (41).png differ diff --git a/.gitbook/assets/image (42).png b/.gitbook/assets/image (42).png new file mode 100644 index 0000000..2f40113 Binary files /dev/null and b/.gitbook/assets/image (42).png differ diff --git a/.gitbook/assets/image (43).png b/.gitbook/assets/image (43).png new file mode 100644 index 0000000..ee4e3d4 Binary files /dev/null and b/.gitbook/assets/image (43).png differ diff --git a/.gitbook/assets/image (44).png 
b/.gitbook/assets/image (44).png new file mode 100644 index 0000000..249a787 Binary files /dev/null and b/.gitbook/assets/image (44).png differ diff --git a/.gitbook/assets/image (45).png b/.gitbook/assets/image (45).png new file mode 100644 index 0000000..4f41008 Binary files /dev/null and b/.gitbook/assets/image (45).png differ diff --git a/.gitbook/assets/image (46).png b/.gitbook/assets/image (46).png new file mode 100644 index 0000000..46b2511 Binary files /dev/null and b/.gitbook/assets/image (46).png differ diff --git a/.gitbook/assets/image (47).png b/.gitbook/assets/image (47).png new file mode 100644 index 0000000..42bef6a Binary files /dev/null and b/.gitbook/assets/image (47).png differ diff --git a/.gitbook/assets/image (48).png b/.gitbook/assets/image (48).png new file mode 100644 index 0000000..e1c7876 Binary files /dev/null and b/.gitbook/assets/image (48).png differ diff --git a/.gitbook/assets/image (49).png b/.gitbook/assets/image (49).png new file mode 100644 index 0000000..6d809df Binary files /dev/null and b/.gitbook/assets/image (49).png differ diff --git a/.gitbook/assets/image (5).png b/.gitbook/assets/image (5).png new file mode 100644 index 0000000..788c0f7 Binary files /dev/null and b/.gitbook/assets/image (5).png differ diff --git a/.gitbook/assets/image (50).png b/.gitbook/assets/image (50).png new file mode 100644 index 0000000..6d809df Binary files /dev/null and b/.gitbook/assets/image (50).png differ diff --git a/.gitbook/assets/image (51).png b/.gitbook/assets/image (51).png new file mode 100644 index 0000000..f58cd45 Binary files /dev/null and b/.gitbook/assets/image (51).png differ diff --git a/.gitbook/assets/image (6) (1) (1).png b/.gitbook/assets/image (6) (1) (1).png new file mode 100644 index 0000000..b4ea5bf Binary files /dev/null and b/.gitbook/assets/image (6) (1) (1).png differ diff --git a/.gitbook/assets/image (6) (1).png b/.gitbook/assets/image (6) (1).png new file mode 100644 index 0000000..b4ea5bf Binary files 
/dev/null and b/.gitbook/assets/image (6) (1).png differ diff --git a/.gitbook/assets/image (6).png b/.gitbook/assets/image (6).png new file mode 100644 index 0000000..f6004d1 Binary files /dev/null and b/.gitbook/assets/image (6).png differ diff --git a/.gitbook/assets/image (7).png b/.gitbook/assets/image (7).png new file mode 100644 index 0000000..247b604 Binary files /dev/null and b/.gitbook/assets/image (7).png differ diff --git a/.gitbook/assets/image (8).png b/.gitbook/assets/image (8).png new file mode 100644 index 0000000..f87ef80 Binary files /dev/null and b/.gitbook/assets/image (8).png differ diff --git a/.gitbook/assets/image (9).png b/.gitbook/assets/image (9).png new file mode 100644 index 0000000..ac33e84 Binary files /dev/null and b/.gitbook/assets/image (9).png differ diff --git a/.gitbook/assets/image.png b/.gitbook/assets/image.png new file mode 100644 index 0000000..95715dc Binary files /dev/null and b/.gitbook/assets/image.png differ diff --git a/README.md b/README.md index 114f80e..362d98c 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,7 @@ Fluence provides an open Web3 protocol, framework and tooling to develop and hos The Fluence Web3 stack enables -* programmable network requests +* programmable network requests * distributed applications from composition without centralization * communication, access and transactional security as first class citizens * extensibility through adapter/wrapper services @@ -18,13 +18,12 @@ Additional resources and support are available: * [Youtube](https://www.youtube.com/channel/UC3b5eFyKRFlEMwSJ1BTjpbw) * [Github](https://github.com/fluencelabs) -* [Discord](https://discord.gg/aR2AYErM) -* [Telegram](https://t.me/fluence_project) -* [Twitter](https://twitter.com/fluence_project) +* [Discord](https://fluence.chat) +* [Telegram](https://t.me/fluence\_project) +* [Twitter](https://twitter.com/fluence\_project) Documentation is work in progress and your feedback is extremely valuable and much appreciated. 
If you have suggestions or unearth errors or inaccuracies, please open an Issue or push a PR. Thank You and Enjoy, The Fluence Team - diff --git a/SUMMARY.md b/SUMMARY.md new file mode 100644 index 0000000..562071e --- /dev/null +++ b/SUMMARY.md @@ -0,0 +1,34 @@ +# Table of contents + +* [Introduction](README.md) +* [Thinking In Aquamarine](p2p.md) +* [Concepts](concepts.md) +* [Quick Start](quick-start/README.md) + * [1. Browser-to-Browser](quick-start/1.-browser-to-browser-1.md) + * [2. Hosted Services](quick-start/2.-hosted-services.md) + * [3. Browser-to-Service](quick-start/3.-browser-to-service.md) + * [4. Service Composition And Reuse With Aqua](quick-start/4.-service-composition-and-reuse-with-aqua.md) + * [5. Decentralized Oracles With Fluence And Aqua](quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md) +* [Aquamarine](knowledge\_aquamarine/README.md) + * [Aqua](knowledge\_aquamarine/hll.md) + * [Marine](knowledge\_aquamarine/marine/README.md) + * [Marine CLI](knowledge\_aquamarine/marine/marine-cli.md) + * [Marine REPL](knowledge\_aquamarine/marine/marine-repl.md) + * [Marine Rust SDK](knowledge\_aquamarine/marine/marine-rs-sdk.md) +* [Tools](knowledge\_tools.md) +* [Node](node.md) +* [Fluence JS](fluence-js/README.md) + * [Concepts](fluence-js/1\_concepts.md) + * [Basics](fluence-js/2\_basics.md) + * [Running app in nodejs](fluence-js/5\_run\_in\_node.md) + * [Running app in browser](fluence-js/4\_run\_in\_browser-1.md) + * [In-depth](fluence-js/3\_in\_depth.md) + * [API reference](fluence-js/6-reference.md) + * [Changelog](fluence-js/changelog.md) +* [Security](knowledge\_security.md) +* [Tutorials](tutorials\_tutorials/README.md) + * [Setting Up Your Environment](tutorials\_tutorials/recipes\_setting\_up.md) + * [Deploy A Local Fluence Node](tutorials\_tutorials/tutorial\_run\_local\_node.md) + * [cUrl As A Service](tutorials\_tutorials/curl-as-a-service.md) + * [Add Your Own Builtins](tutorials\_tutorials/add-your-own-builtin.md) +* 
[Research, Papers And References](research-papers-and-references.md) diff --git a/concepts.md b/concepts.md new file mode 100644 index 0000000..b5e2261 --- /dev/null +++ b/concepts.md @@ -0,0 +1,143 @@ +# Concepts + +## Concepts + +The Fluence solution enables a new class of decentralized Web3 solutions providing technical, security and business benefits otherwise not available. In order for solution architects and developers to realize these benefits, a shift in philosophy and implementation is required. With the Fluence tool chain available, developers should find it possible to code meaningful Web3 solutions in short order once an understanding of the core concepts and Aqua is in place. + +The remainder of this section introduces the core concepts underlying the Fluence solution. + +### **Particles** + +Particles are Fluence's secure distributed state medium, i.e., conflict-free replicated data structures containing application data, workflow scripts and some metadata, that traverse programmatically specified routes in a highly secure manner. That is, _particles_ hop from distributed compute service to distributed compute service across the peer-to-peer network as specified by the application workflow, updating along the way. + +Figure 4: Node-Service Perspective Of Particle Workflow + +![](https://i.imgur.com/u4beJgh.png) + +Not surprisingly, particles are an integral part of the Fluence protocol and stack. It is the very decoupling of data + workflow instructions from the service and network components that allows the secure composition of applications from services distributed across a permissionless peer-to-peer network. + +While the application state change resulting from passing a particle "through" a service with respect to the data components is quite obvious, the ensuing state change with respect to the workflow also needs to be recognized, which is handled by the Aqua VM. 
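This combination of a programmed route and per-hop updates is precisely what an Aqua script expresses. Below is a minimal sketch; the `SomeService` name and the peer arguments are hypothetical, for illustration only:

```
-- Hypothetical service assumed to be deployed on both peers
service SomeService("some-service-id"):
    process(input: string) -> string

-- The particle hops from peerA to peerB, updating its data at each step
func twoHop(peerA: string, peerB: string, input: string) -> string:
    on peerA:
        intermediate <- SomeService.process(input)
    on peerB:
        result <- SomeService.process(intermediate)
    <- result
```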
+ +As depicted in Figure 4, a particle traverses to a destination node's Aqua VM where the next execution step is evaluated and, if specified, triggered. That is, the service programmatically specified to operate on the particle's data is called from the Aqua VM, the particle's data and workflow (step) are updated and the Aqua VM routes the particle to the next specified destination, which may be on the same, another or the client peer. + +### **Aqua** + +An integral enabler of the Fluence solution is Aqua, an open source language purpose-built to enable developers to ergonomically program distributed networks and applications by composition. Aqua scripts compile to an intermediate representation, called AIR, which executes on the Aqua Virtual Machine (Aqua VM), itself a Wasm module hosted on the Marine interpreter on every peer node. + +Figure 5: From Aqua Script To Particle Execution + +![](<.gitbook/assets/image (5).png>) + +Currently, compiled Aqua scripts can be executed from TypeScript clients based on the [Fluence SDK](https://github.com/fluencelabs/fluence-js). For more information about Aqua, see the [Aqua book](https://doc.fluence.dev/aqua-book/). + +### **Marine** + +Marine is Fluence's generalized Wasm runtime executing Wasm Interface Type (IT) modules with Aqua VM compatible interfaces on each peer. Let's unpack. + +Services behave similarly to microservices: they are created on nodes, served by the Marine VM and can _only_ be called by the Aqua VM. They are also passive in that they can accept incoming calls but can't initiate an outgoing request without being called. 
+ +Services are + +* comprised of Wasm IT modules that can be composed into applications +* developed in Rust for a wasm32-wasi compile target +* deployed on one or more nodes +* running on the Marine VM which is deployed to every node + +Figure 6: Stylized Execution Flow On Peer + +![](<.gitbook/assets/image (6).png>) + +Please note that the Aqua VM is itself a Wasm module running on the Marine VM. + +The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) abstracts the Wasm IT implementation detail behind a handy macro that allows developers to easily create Marine VM compatible Wasm modules. In the example below, applying the `marine` macro turns a basic Rust function into a Wasm IT compatible function and enforces the type requirements at the compiler level. + +```rust +#[marine] +pub fn greeting(name: String) -> String { + format!("Hi, {}", name) +} +``` + +### **Services** + +Services are logical constructs instantiated from Wasm modules that contain some business logic and configuration data. That is, services are created, i.e., linked, at the Marine VM runtime level from uploaded Wasm modules and the relevant metadata. + +_Blueprints_ are JSON documents that provide the necessary information to build a service from the associated Wasm modules. + +Figure 7: Service Composition and Execution Model + +![](<.gitbook/assets/image (7).png>) + +Please note that services cannot accept more than one request at a time. Consider a service, FooBar, comprised of two functions, foo() and bar(), where foo is a long-running function. 
+ +``` +-- Stylized FooBar service with two functions +-- foo() and bar() +-- foo is long-running +-- if foo is called before bar, bar is blocked +service FooBar("service-id"): + bar() -> string + foo() -> string -- long-running function + +func foobar(node: string, service_id: string, func_name: string) -> string: + res: *string + on node: + FooBar service_id + if func_name == "foo": + res <- FooBar.foo() + else: + res <- FooBar.bar() + <- res! +``` + +As long as foo() is running, the entire FooBar service, including bar(), is blocked. This has implications with respect to both service granularity and redundancy. + +### **Modules** + +In the Fluence solution, Wasm IT modules take one of three forms: + +* Facade Module: exposes the API of the service comprised of one or more modules. Every service has exactly one facade module +* Pure Module: performs computations without side-effects +* Effector Module: performs at least one computation with a side-effect + +It is important for architects and developers to be aware of their module and service construction with respect to state changes. + +### **Authentication And Permissioning** + +Authentication at the service API level is an inherent feature of the Fluence solution. This fine-grained approach essentially provides [ambient authority](https://en.wikipedia.org/wiki/Ambient\_authority) out of the box. + +In the Fluence solution, this is accomplished by a SecurityTetraplet, which is a data structure with four fields: + +```rust +pub struct SecurityTetraplet { + pub peer_id: String, + pub service_id: String, + pub fn_name: String, + pub getter: String, +} +``` + +SecurityTetraplets are provided with the function call arguments for each (service) function call and are checked by the called service. 
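To illustrate the idea, a called service might guard an argument with a check along the following lines. This is a hypothetical sketch, not the actual Marine SDK API; only the field names follow the SecurityTetraplet structure above:

```rust
// Hypothetical sketch of a tetraplet-based guard a service could apply
// before trusting an argument. Field names mirror the SecurityTetraplet
// structure; this is not the actual Marine SDK API.
pub struct SecurityTetraplet {
    pub peer_id: String,
    pub service_id: String,
    pub fn_name: String,
    pub getter: String,
}

/// Accept an argument only if it originates from the owner's peer.
pub fn is_owner_call(tetraplet: &SecurityTetraplet, owner_peer_id: &str) -> bool {
    tetraplet.peer_id == owner_peer_id
}
```
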
Hence, authentication based on the **(service caller id == service owner id)** relation can be established at service ingress and leveraged to build powerful, fine-grained identity and access management solutions enabling true zero trust architectures. + +### **Trust Layer** + +The Fluence protocol offers TrustGraph, an alternative to node selection, i.e., connection and permissioning, approaches such as [Kademlia](https://en.wikipedia.org/wiki/Kademlia). A TrustGraph is comprised of subjective weights assigned to nodes to manage peer connections. TrustGraphs are node-operator specific and transitive. That is, a trusted node's trusted neighbors are considered trustworthy. + +{% hint style="info" %} +[TrustGraph](https://github.com/fluencelabs/trust-graph) is currently under active development. Please check the repo for progress. +{% endhint %} + +## **Application** + +An application is the "frontend" to one or more services and their execution sequence. Applications are developed by coordinating one or more services into a logical compute unit and tend to live outside the Fluence network, e.g., the browser as a peer-client. They can be executed in various runtime environments ranging from browsers to backend daemons. + +#### **Scaling Applications** + +As discussed previously, decoupling at the network and business logic levels is at the core of the Fluence protocol and provides the major entry points for scaling solutions. + +At the peer-to-peer network level, scaling can be achieved with subnetworks. Subnetworks are currently under development and we will update this section in the near future. + +At the service level, we can achieve scale through parallelization due to the decoupling of resource management from infrastructure. That is, sequential and parallel execution flow logic are an inherent part of Aqua's programming model. 
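For instance, Aqua's `par` keyword expresses such a concurrent fan-out directly. A sketch with a hypothetical `Compute` service assumed to be deployed on both peers:

```
-- Hypothetical service deployed on peerA and peerB
service Compute("compute-service-id"):
    run() -> string

-- Call run() on both peers concurrently and collect the results
func fanOut(peerA: string, peerB: string) -> []string:
    results: *string
    on peerA:
        results <- Compute.run()
    par on peerB:
        results <- Compute.run()
    join results[0], results[1]
    <- results
```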
To achieve concurrency, the target services need to be available on multiple peers, as module calls are blocking. + +Figure 8: Stylized Par Execution + +![](<.gitbook/assets/image (8).png>) diff --git a/fluence-js/1_concepts.md b/fluence-js/1_concepts.md new file mode 100644 index 0000000..465390f --- /dev/null +++ b/fluence-js/1_concepts.md @@ -0,0 +1,43 @@ +# Concepts + +## Creating applications with Aqua language + +The official way to write applications for Fluence is to use the Aqua programming language. The Aqua compiler emits TypeScript or JavaScript, which in turn can be called from a js-based environment. The compiler outputs code for the following entities: + +1. Exported `func` declarations are turned into callable async functions +2. Exported `service` declarations are turned into functions which register callback handlers in a typed manner + +To learn more about Aqua, see the [Aqua book](https://doc.fluence.dev/aqua-book/). + +The building blocks of an application are: + +* Aqua code for peer-to-peer communication +* The Aqua compiler CLI package for Aqua-to-TypeScript (or JavaScript) compilation +* Initialization of the `FluencePeer` +* Application-specific TypeScript (or JavaScript) code in the framework of your choice + +In the next section we will see it in action. + +## Facade API + +The main entry point of `@fluencelabs/fluence` is the `Fluence` facade. It provides an easy way to start and stop the Fluence Peer. The facade API is enough for most use cases. + +## Fluence peer in JS + +The `@fluencelabs/fluence` package also exports the `FluencePeer` class. This class implements the Fluence protocol for JavaScript-based environments. It provides all the necessary features to communicate with the Fluence network, namely: + +1. Connectivity with one or more Fluence nodes, which allows sending particles to and receiving them from other peers +2. The Peer Id identifying the node in the network +3. The Aqua VM, which allows the execution of AIR scripts inside particles +4. 
A set of builtin functions required by the Fluence protocol +5. Support for the typescript code which is generated by the Aqua compiler + +Even though the js-based implementation closely resembles [node](https://doc.fluence.dev/docs/node), there are some considerable differences from the latter. + +`FluencePeer` does not host services composed of wasm modules. Instead, it allows registering service call handlers directly in javascript. The Aqua language compiler creates typed helpers for that task. + +Due to the limitations of the browser-based environment, `FluencePeer` cannot be discovered by its Peer Id on its own. To overcome this, `FluencePeer` must use an existing node which will act as a `relay`. When a peer is connected through a relay it is considered to be a `client`. The `FluencePeer` routes all its particles through its relay, thus taking advantage of the peer discovery implemented on the node. A particle sent to the connected client must be routed through its relay. + +The js-based peer does not implement the full set of builtin functions due to the limitations described previously. E.g., there is no built-ins implementation for _kad_ or _srv_ services. However, the _op_ service is fully implemented. For the full descriptions of implemented built-ins refer to the [API reference](https://doc.fluence.dev/docs/fluence-js/6-reference) + +In contrast with the node implementation, `FluencePeer` can initiate the execution of new particles. The Aqua compiler generates executable functions from `func` definitions in aqua code. diff --git a/fluence-js/2_basics.md b/fluence-js/2_basics.md new file mode 100644 index 0000000..e024264 --- /dev/null +++ b/fluence-js/2_basics.md @@ -0,0 +1,194 @@ +# Basics + +## Intro + +In this section we will show you how Fluence JS can be used to create a hello world application with the Fluence stack.
+ +## Aqua code + +Let's start with the aqua code first: + +``` +import Peer from "@fluencelabs/aqua-lib/builtin.aqua" -- (1) + +service HelloWorld("hello-world"): -- (2) + hello(str: string) + getFortune() -> string + +func sayHello(): -- (3) + HelloWorld.hello("Hello, world!") + +func tellFortune() -> string: -- (4) + res <- HelloWorld.getFortune() + <- res + +func getRelayTime() -> u64: -- (5) + on HOST_PEER_ID: + ts <- Peer.timestamp_ms() + <- ts +``` + +We need to import definitions to call standard Peer operations (1) + +This file has four definitions. + +(2) is a service named `HelloWorld`. A service defines an interface of functions executable on a peer. We will register a handler for this interface in our typescript application. + +(3) and (4) are the functions `sayHello` and `tellFortune` respectively. These functions are very simple. The first one simply calls the `hello` method of the `HelloWorld` service located on the current peer. Similarly, `tellFortune` calls the `getFortune` method from the same service and returns the value to the caller. We will show you how to call these functions from the typescript application. + +Finally, we have a function (5) which demonstrates how to work with the network. It asks the relay peer for the current time and returns it back to our peer. + +## Installing dependencies + +Initialize an empty npm package: + +```bash +npm init +``` + +We will need these two packages for the application runtime: + +```bash +npm install @fluencelabs/fluence @fluencelabs/fluence-network-environment +``` + +The first one is the SDK itself and the second is a maintained list of Fluence networks and nodes to connect to. + +The Aqua compiler CLI has to be installed, but it is not needed at runtime. + +```bash +npm install --save-dev @fluencelabs/aqua +``` + +Aqua comes with the standard library which can be accessed from the "@fluencelabs/aqua-lib" package.
All the aqua packages are only needed at compile time, so we install them as development dependencies: + +```bash +npm install --save-dev @fluencelabs/aqua-lib +``` + +Also, we might want to have aqua source files automatically recompiled on every save. We will take advantage of chokidar for that: + +```bash +npm install --save-dev chokidar-cli +``` + +And last, but not least, we will need TypeScript: + +```bash +npm install --save-dev typescript +npx tsc --init +``` + +## Setting up aqua compiler + +Let's put the aqua code described earlier into the `aqua/hello-world.aqua` file. You probably want to keep the generated TypeScript in the same directory with other typescript files, usually `src`. Let's create the `src/_aqua` directory for that. + +The overall project structure looks like this: + +``` + ┣ aqua + ┃ ┗ hello-world.aqua + ┣ src + ┃ ┣ _aqua + ┃ ┃ ┗ hello-world.ts + ┃ ┗ index.ts + ┣ package-lock.json + ┣ package.json + ┗ tsconfig.json +``` + +The Aqua compiler can be run with `npm`: + +```bash +npx aqua -i ./aqua/ -o ./src/_aqua +``` + +We recommend storing this logic inside scripts in the `package.json` file: + +```javascript +{ + ... + "scripts": { + ... + "compile-aqua": "aqua -i ./aqua/ -o ./src/_aqua", // (1) + "watch-aqua": "chokidar \"**/*.aqua\" -c \"npm run compile-aqua\"" // (2) + }, + ... +} +``` + +`compile-aqua` (1) runs the compilation once, producing `src/_aqua/hello-world.ts` in our case. `watch-aqua` (2) starts watching for any changes in .aqua files, recompiling them on the fly. + +## Using the compiled code in typescript application + +Using the code generated by the compiler is as easy as calling a function. The compiler generates all the boilerplate needed to send a particle into the network and wraps it into a single call. It also generates functions for service callback registration. Note that all the type information and therefore type checking and code completion facilities are there! + +Let's see how to use the generated code in our application.
`index.ts`: + +```typescript +import { Fluence } from "@fluencelabs/fluence"; +import { krasnodar } from "@fluencelabs/fluence-network-environment"; // (1) +import { + registerHelloWorld, + sayHello, + getRelayTime, + tellFortune, +} from "./_aqua/hello-world"; // (2) + +async function main() { + await Fluence.start({ connectTo: krasnodar[0] }); // (3) + + // (4) + registerHelloWorld({ + hello: (str) => { + console.log(str); + }, + getFortune: async () => { + await new Promise((resolve) => { + setTimeout(resolve, 1000); + }); + return "Wealth awaits you very soon."; + }, + }); + + await sayHello(); // (5) + + console.log(await tellFortune()); // (6) + + const relayTime = await getRelayTime(); + + console.log("The relay time is: ", new Date(relayTime).toLocaleString()); + + await Fluence.stop(); // (7) +} + +main(); +``` + +(1) Imports the list of possible relay nodes (network environment) + +(2) The Aqua compiler provides functions which can be directly imported like any normal typescript function. + +(3) A Fluence peer has to be started before running any application in the Fluence Network. For the vast majority of use cases you should use the `Fluence` facade to start and stop the peer. The `start` method accepts a parameters object. The most common parameter is the address of the relay node the peer should connect to. In this example we are using the first node of the `krasnodar` network. If you do not specify the `connectTo` option, you will only be able to execute air scripts on the local machine. Please keep in mind that the init function is asynchronous. + +For every exported `service XXX` definition in aqua code, the compiler provides a `registerXXX` counterpart. These functions provide a type-safe way of registering callback handlers for the services. The callbacks are executed when the appropriate service is called in aqua on the current peer.
The handlers take the form of an object where the keys are the names of the functions and the values are async functions used as the corresponding callbacks. For example, in (4) we are registering handlers for the `HelloWorld` service functions; the `hello` handler outputs its parameter to the console. Please note that the handlers can be implemented in both synchronous and asynchronous ways. A handler can be made asynchronous like any other function in javascript: either return a Promise or mark it with the async keyword to take advantage of the async-await pattern. + +For every exported `func XXX` definition in aqua code, the compiler provides an async function which can be directly called from typescript. In (5) and (6) we are calling exported aqua functions with no arguments. Note that every function is asynchronous. + +(7) You should call `stop` when the peer is no longer needed. As a rule of thumb, all peers should be uninitialized before destroying the application. + +Let's try running the example: + +```bash +node -r ts-node/register src/index.ts +``` + +If everything has been done correctly you should see `Hello, world!` in the console. + +The next section will cover in-depth and advanced usage of Fluence JS. + +The code from this section is available on [github](https://github.com/fluencelabs/examples/tree/main/fluence-js-examples/hello-world) + +## Running Fluence application in different environments + +Fluence JS instantiates the Aqua Virtual Machine (AVM) from a wasm file and runs it in a background thread. Different mechanisms are used depending on the JS environment. In nodejs, worker threads are used for background execution and the wasm file is read from the filesystem. In browser-based environments, web workers are used and the wasm file is loaded from the server hosting the application. The next two sections cover how to configure a fluence application depending on the environment.
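The environment split described above (worker threads in nodejs, web workers in the browser) boils down to a runtime check. The sketch below is purely illustrative of that idea and is not Fluence JS's actual internals; `detectEnvironment` is a hypothetical helper:

```typescript
// Illustrative sketch: branching on the JS environment at runtime.
// A library that must load a wasm file differently in nodejs vs the
// browser typically performs a check along these lines.
function detectEnvironment(): "nodejs" | "browser" {
  // In nodejs, the global `process` object carries node version info
  if (typeof process !== "undefined" && !!process.versions?.node) {
    return "nodejs";
  }
  // No `process` global: assume a browser-like environment
  return "browser";
}

console.log(detectEnvironment());
```

Running this under nodejs prints `nodejs`; bundled into a web page it would print `browser`.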
diff --git a/fluence-js/3_in_depth.md b/fluence-js/3_in_depth.md new file mode 100644 index 0000000..4519292 --- /dev/null +++ b/fluence-js/3_in_depth.md @@ -0,0 +1,529 @@ +# In-depth + +## Intro + +In this section we will cover Fluence JS in depth. + +## Fluence + +`@fluencelabs/fluence` exports a facade `Fluence` which provides all the needed functionality for most use cases. It defines 4 functions: + +* `start`: Starts the default peer. +* `stop`: Stops the default peer. +* `getStatus`: Gets the status of the default peer. This includes connection information. +* `getPeer`: Gets the default Fluence Peer instance (see below) + +Under the hood the `Fluence` facade calls the corresponding method on the default instance of FluencePeer. This instance is passed to the Aqua-compiler generated functions by default. + +## FluencePeer class + +The second export of the `@fluencelabs/fluence` package is the `FluencePeer` class. It is useful in scenarios where the application needs to run several different peers at once. The overall workflow with the `FluencePeer` is the following: + +1. Create an instance of the peer +2. Start the peer +3. Use the peer in the application +4. Stop the peer + +To create a new peer, simply instantiate the `FluencePeer` class: + +```typescript +const peer = new FluencePeer(); +``` + +The constructor simply creates a new object and does not initialize any workflow. The `start` function starts the Aqua VM, initializes the default call service handlers and (optionally) connects to the Fluence network. The function takes an optional object specifying additional peer configuration. One option you will be using a lot is `connectTo`. It tells the peer to connect to a relay. For example: + +```typescript +await peer.start({ + connectTo: krasnodar[0], +}); +``` + +connects to the first node of the Krasnodar network. You can find the officially maintained list of networks in the `@fluencelabs/fluence-network-environment` package.
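Instead of always taking the first entry of the maintained list, you can spread clients across the published relays. The sketch below illustrates the idea; the `Node` shape and the stub list are hypothetical stand-ins for what `@fluencelabs/fluence-network-environment` actually exports:

```typescript
// Hypothetical shape of an entry in the maintained network list
interface Node {
  multiaddr: string;
  peerId: string;
}

// Pick a pseudo-random relay so clients don't all pile onto one node
function pickRelay(nodes: Node[]): Node {
  if (nodes.length === 0) {
    throw new Error("no relay nodes available");
  }
  return nodes[Math.floor(Math.random() * nodes.length)];
}

// Stubbed list for illustration only; use the real package exports in practice
const stubNetwork: Node[] = [
  { multiaddr: "/dns4/node-0.example/tcp/9000/wss", peerId: "12D3KooWexample0" },
  { multiaddr: "/dns4/node-1.example/tcp/9000/wss", peerId: "12D3KooWexample1" },
];

const relay = pickRelay(stubNetwork);
// `relay` could then be passed as the `connectTo` option when starting the peer
```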
The full list of supported options is described in the [API reference](https://github.com/fluencelabs/gitbook-docs/tree/77344eb147c2ce17fe1c0f37013082fc85c1ffa3/js-sdk/js-sdk/6\_reference/modules.md) + +To uninitialize the peer, call the `stop` method: + +```typescript +await peer.stop(); +``` + +## Using multiple peers in one application + +The peer by itself does not do any useful work. You should take advantage of functions generated by the Aqua compiler. + +If your application needs several peers, you should create a separate `FluencePeer` instance for each of them. The generated functions accept the peer as the first argument. For example: + +```typescript +import { FluencePeer } from "@fluencelabs/fluence"; +import { + registerSomeService, + someCallableFunction, +} from "./_aqua/someFunction"; + +async function main() { + const peer1 = new FluencePeer(); + const peer2 = new FluencePeer(); + + // Don't forget to initialize peers + await peer1.start({ + connectTo: relay, + }); + await peer2.start({ + connectTo: relay, + }); + + // ... more application logic + + // Pass the peer as the first argument + // || + // \/ + registerSomeService(peer1, { + handler: async (str) => { + console.log("Called service on peer 1: " + str); + }, + }); + + // Pass the peer as the first argument + // || + // \/ + registerSomeService(peer2, { + handler: async (str) => { + console.log("Called service on peer 2: " + str); + }, + }); + + // Pass the peer as the first argument + // || + // \/ + await someCallableFunction(peer1, arg1, arg2, arg3); + + + await peer1.stop(); + await peer2.stop(); +} + +// ... more application logic +``` + +It is possible to combine usage of the default peer with another one. Pay close attention to which peer you are calling the functions against.
+ +```typescript + // Registering handler for the default peer + registerSomeService({ + handler: async (str) => { + console.log("Called against the default peer: " + str); + }, + }); + + // Pay close attention to this + // || + // \/ + registerSomeService(someOtherPeer, { + handler: async (str) => { + console.log("Called against the peer named someOtherPeer: " + str); + }, + }); +``` + +## Understanding the Aqua compiler output + +The Aqua compiler emits TypeScript or JavaScript which in turn can be called from a js-based environment. The compiler outputs code for the following entities: + +1. Exported `func` declarations are turned into callable async functions +2. Exported `service` declarations are turned into functions which register callback handlers in a typed manner +3. For every exported `service` the compiler generates its interface under the name `{serviceName}Def` + +### Function definitions + +For every exported function definition in aqua the compiler generates two overloads. One accepts the `FluencePeer` instance as the first argument, the other does not. Otherwise the arguments are the same and correspond to the arguments of the aqua function. The last argument is always an optional config object with the following properties: + +* `ttl`: Optional parameter which specifies the TTL (time to live) of the particle carrying the execution logic for the function + +The return type is always a promise of the aqua function's return type. If the function does not return anything, the return type will be `Promise<void>`.
+ +Consider the following example: + +``` +func myFunc(arg0: string, arg1: string): + -- implementation +``` + +The compiler will generate the following overloads: + +```typescript +export async function myFunc( + arg0: string, + arg1: string, + config?: { ttl?: number } +): Promise<void>; + +export async function myFunc( + peer: FluencePeer, + arg0: string, + arg1: string, + config?: { ttl?: number } +): Promise<void>; +``` + +### Service definitions + +``` +service ServiceName: + -- service interface +``` + +For every exported `service` declaration the compiler will generate two entities: a service interface under the name `{serviceName}Def` and a function named `register{serviceName}` with several overloads. First let's describe the most complete one using the following example: + +```typescript +export interface ServiceNameDef { + //... service function definitions +} + +export function registerServiceName( + peer: FluencePeer, + serviceId: string, + service: ServiceNameDef +): void; +``` + +* `peer` - the Fluence Peer instance where the handler should be registered. The peer can be omitted. In that case the default Fluence Peer will be used instead +* `serviceId` - the service id. If the service was defined with the default service id in aqua code, this argument can be omitted. +* `service` - the handler for the service. + +Depending on whether or not the service was defined with the default id, the number of overloads will be different. In case it **is defined**, there will be four overloads: + +```typescript +// (1) +export function registerServiceName( + service: ServiceNameDef +): void; + +// (2) +export function registerServiceName( + serviceId: string, + service: ServiceNameDef +): void; + +// (3) +export function registerServiceName( + peer: FluencePeer, + service: ServiceNameDef +): void; + +// (4) +export function registerServiceName( + peer: FluencePeer, + serviceId: string, + service: ServiceNameDef +): void; +``` + +1.
Uses the default Fluence Peer and the default id taken from the aqua definition +2. Uses the default Fluence Peer and specifies the service id explicitly +3. The default id is taken from the aqua definition. The peer is specified explicitly +4. Specifies both the peer and the service id. + +If the default id **is not defined** in aqua code, the overloads will exclude the ones without a service id: + +```typescript +// (1) +export function registerServiceName( + serviceId: string, + service: ServiceNameDef +): void; + +// (2) +export function registerServiceName( + peer: FluencePeer, + serviceId: string, + service: ServiceNameDef +): void; +``` + +1. Uses the default Fluence Peer and specifies the service id explicitly +2. Specifies both the peer and the service id. + +### Service interface + +The service interface type closely follows the definition in aqua code. It has the form of an object whose keys correspond to the names of the service members and whose values are functions of the type translated from the aqua definition (see Type conversion).
For example, for the following aqua definition: + +``` +service Calc("calc"): + add(n: f32) + subtract(n: f32) + multiply(n: f32) + divide(n: f32) + reset() + getResult() -> f32 +``` + +The typescript interface will be: + +```typescript +export interface CalcDef { + add: (n: number, callParams: CallParams<"n">) => void | Promise<void>; + subtract: (n: number, callParams: CallParams<"n">) => void | Promise<void>; + multiply: (n: number, callParams: CallParams<"n">) => void | Promise<void>; + divide: (n: number, callParams: CallParams<"n">) => void | Promise<void>; + reset: (callParams: CallParams) => void | Promise<void>; + getResult: (callParams: CallParams) => number | Promise<number>; +} +``` + +`CallParams` will be described later in this section + +### Type conversion + +Basic type conversion is straightforward: + +* `string` is converted to `string` in typescript +* `bool` is converted to `boolean` in typescript +* All number types (`u8`, `u16`, `u32`, `u64`, `s8`, `s16`, `s32`, `s64`, `f32`, `f64`) are converted to `number` in typescript + +Arrow types translate to functions in typescript which have their arguments translated to typescript types. In addition to arguments defined in aqua, typescript counterparts have an additional argument for call params. For the majority of use cases this parameter is not needed and can be omitted. + +The type conversion works the same way for `service` and `func` definitions. For example a `func` with a callback might look like this: + +``` +func callMeBack(callback: string, i32 -> ()): + callback("hello, world", 42) +``` + +The type for the `callback` argument will be: + +```typescript +callback: (arg0: string, arg1: number, callParams: CallParams<'arg0' | 'arg1'>) => void | Promise<void>, +``` + +For service definitions, arguments are named (see the calc example above) + +### Using asynchronous code in callbacks + +The Typescript code generated by the Aqua compiler has two scenarios where a user should specify a callback function.
These are services and callback arguments of functions in aqua. If you look at the return type of a generated callback you will see a union of the callback return type and a promise of that type, e.g. `string | Promise<string>`. Fluence-js supports both sync and async versions of callbacks and figures out which one is used under the hood. A callback can be made asynchronous like any other function in javascript: either return a Promise or mark it with the async keyword to take advantage of the async-await pattern. + +For example: + +``` +func withCallback(callback: string -> ()): + callback("hello") + +service MyService: + callMe(string) +``` + +Here we are returning a promise + +```typescript +registerMyService({ + callMe: (arg): Promise<void> => { + return new Promise<void>((resolve) => { + setTimeout(() => { + console.log("I'm running 3 seconds after call"); + resolve(); + }, 3000); + }); + }, +}); +``` + +And here we are using the async-await pattern + +```typescript +await withCallback(async (arg) => { + const data = await getStuffFromDatabase(arg); + console.log(data); +}); +``` + +### Call params and tetraplets + +Each service call is accompanied by additional information specific to the Fluence Protocol, including `initPeerId` - the peer which initiated the particle execution, the particle signature and, most importantly, security tetraplets. All this data is contained inside the last `callParams` argument of every generated function definition. This data is passed to the handler on each function call and can be used in the application. + +Tetraplets have the form of: + +```typescript +{ + argName0: SecurityTetraplet[], + argName1: SecurityTetraplet[], + // ... +} +``` + +To learn more about tetraplets and application security see [Security](../knowledge\_security.md) + +To see the full specification of the `CallParams` type see the [API reference](6-reference.md) + +## Signing service + +The signing service is useful when you want to sign arbitrary data and pass it further inside a single aqua script.
The signing service allows restricting its usage for security reasons: e.g., you don't want to sign anything unless it comes from a trusted source. The aqua side API is the following: + +``` +data SignResult: + -- Was call successful or not + success: bool + -- Error message. Will be null if the call is successful + error: ?string + -- Signature as byte array. Will be null if the call is not successful + signature: ?[]u8 + +-- Available only on FluenceJS peers +-- The service can also be resolved by its host peer id +service Sig("sig"): + -- Signs data with the private key used by signing service. + -- Depending on implementation the service might check call params to restrict usage for security reasons. + -- By default signing service is only allowed to be used on the same peer the particle was initiated + -- and accepts data only from the following sources: + -- trust-graph.get_trust_bytes + -- trust-graph.get_revocation_bytes + -- registry.get_key_bytes + -- registry.get_record_bytes + -- registry.get_host_record_bytes + -- Argument: data - byte array to sign + -- Returns: signature as SignResult structure + sign(data: []u8) -> SignResult + + -- Given the data and signature both as byte arrays, returns true if the signature is correct, false otherwise. + verify(signature: []u8, data: []u8) -> bool + + -- Gets service's public key. + get_pub_key() -> string +``` + +FluenceJS ships the service implementation as a JavaScript class: + +```typescript +/** + * Whether signing operation is allowed or not. + * Implemented as a predicate of CallParams. + */ +export type SigSecurityGuard = (params: CallParams<"data">) => boolean; + +export class Sig implements SigDef { + private _keyPair: KeyPair; + + constructor(keyPair: KeyPair) { + this._keyPair = keyPair; + } + + /** + * Security guard predicate + */ + securityGuard: SigSecurityGuard; + + /** + * Gets the public key of KeyPair.
Required by aqua + */ + get_pub_key() { + // implementation omitted + } + + /** + * Signs the data using key pair's private key. Required by aqua + */ + async sign( + data: number[], + callParams: CallParams<"data"> + ): Promise<SignResult> { + // implementation omitted + } + + /** + * Verifies the signature. Required by aqua + */ + verify(signature: number[], data: number[]): Promise<boolean> { + // implementation omitted + } +} +``` + +`securityGuard` specifies the way the `sign` method checks whether the incoming data is allowed to be signed. It accepts one argument: call params (see "Call params and tetraplets") and must return either true or false. Any predicate can be specified. Also, FluenceJS ships with a set of useful predicates: + +```typescript +/** + * Only allow calls when tetraplet for 'data' argument satisfies the predicate + */ +export const allowTetraplet = (pred: (tetraplet: SecurityTetraplet) => boolean): SigSecurityGuard => {/*...*/}; + +/** + * Only allow data which comes from the specified serviceId and fnName + */ +export const allowServiceFn = (serviceId: string, fnName: string): SigSecurityGuard => {/*...*/}; + +/** + * Only allow data originated from the specified json_path + */ +export const allowExactJsonPath = (jsonPath: string): SigSecurityGuard => {/*...*/}; + +/** + * Only allow signing when particle is initiated at the specified peer + */ +export const allowOnlyParticleOriginatedAt = (peerId: PeerIdB58): SigSecurityGuard => {/*...*/}; + +/** + * Only allow signing when all of the predicates are satisfied. + * Useful for predicates reuse + */ +export const and = (...predicates: SigSecurityGuard[]): SigSecurityGuard => {/*...*/}; + +/** + * Only allow signing when any of the predicates are satisfied.
+ * Useful for predicates reuse + */ +export const or = (...predicates: SigSecurityGuard[]): SigSecurityGuard => {/*...*/}; +``` + +Predicates as well as the `Sig` definition can be found in `@fluencelabs/fluence/dist/services` + +The `Sig` class is accompanied by `registerSig` which allows registering different signing services with different keys. The mechanism is exactly the same as with ordinary aqua services, e.g.: + +```typescript +// create a key pair from secret key bytes +const customKeyPair = await KeyPair.fromEd25519SK(skBytes); + +// create a signing service with the specific key pair +const customSig = new Sig(customKeyPair); + +// restrict sign usage to our needs +customSig.securityGuard = allowServiceFn("my_service", "my_function"); + +// register the service. Please note that the service id ("CustomSig" here) has to be specified. +registerSig("CustomSig", customSig); +``` + +For a [non-default peer](3\_in\_depth.md#using-multiple-peers-in-one-application), the instance has to be specified: + +```typescript +const peer = new FluencePeer(); +await peer.start(); + +// ... + +registerSig(peer, "CustomSig", customSig); +``` + +`FluencePeer` ships with the default signing service implementation, registered with id "Sig". It is useful for working with the TrustGraph and Registry APIs.
The default implementation has the following restrictions on the `sign` operation: + +* Only allowed to be used on the same peer the particle was initiated on +* Restricts data to the following services: + * trust-graph.get\_trust\_bytes + * trust-graph.get\_revocation\_bytes + * registry.get\_key\_bytes + * registry.get\_record\_bytes + +The default signing service class can be accessed in the following way: + +```typescript +// for default FluencePeer: +const sig = Fluence.getPeer().getServices().sig; + +// for non-default FluencePeer: +// const peer = new FluencePeer(); +// await peer.start() +const sig = peer.getServices().sig; + +// change securityGuard for the default service: +sig.securityGuard = or( + sig.securityGuard, + allowServiceFn("my_service", "my_function") +); +``` diff --git a/fluence-js/4_run_in_browser-1.md b/fluence-js/4_run_in_browser-1.md new file mode 100644 index 0000000..a36b31e --- /dev/null +++ b/fluence-js/4_run_in_browser-1.md @@ -0,0 +1,28 @@ +# Running app in browser + +You can use Fluence JS with any framework (or even without one). The "fluence" part of the application is a collection of pure typescript/javascript functions which can be called within any framework of your choosing. + +## Configuring application to run in browser + +To run an application in the browser you need to configure the server which hosts the application to serve two additional files: + +* `avm.wasm` is the execution file of AquaVM. The file is located in the `@fluencelabs/avm` package +* `runnerScript.web.js` is the web worker script responsible for running AVM in the background. The file is located in the `@fluencelabs/avm-runner-background` package + +Fluence JS provides a utility script named `copy-avm-public` which locates the files described above and copies them into the specified directory. For example, if static files are served from the `public` directory you can run `copy-avm-public public` to copy the files needed to run Fluence.
It is recommended to put their names into `.gitignore` and run the script on every build to prevent possible inconsistencies with file versions. This can be done using npm's `postinstall` script: + +``` + ... + "scripts": { + "postinstall": "copy-avm-public public", + "start": "...", + .. + }, + ... +``` + +## Demo applications + +See the browser-example which demonstrates integrating Fluence with React: [github](https://github.com/fluencelabs/examples/tree/main/fluence-js-examples/browser-example) + +Also take a look at FluentPad. It is an example application written in React: [https://github.com/fluencelabs/fluent-pad](https://github.com/fluencelabs/fluent-pad) diff --git a/fluence-js/5_run_in_node.md b/fluence-js/5_run_in_node.md new file mode 100644 index 0000000..adcaba6 --- /dev/null +++ b/fluence-js/5_run_in_node.md @@ -0,0 +1,124 @@ +# Running app in nodejs + +It is easy to use Fluence JS in NodeJS applications. You can take full advantage of the javascript ecosystem and at the same time expose services to the Fluence Network. That makes it an excellent choice for quick prototyping of applications for the Fluence Stack. + +## Configuring application to run in nodejs + +`@fluencelabs/fluence` delivers the AquaVM wasm file through the npm package. No additional configuration is needed. + +## Calc app example + +Let's implement a very simple app which simulates a desk calculator.
The calculator has internal memory and implements the following set of operations: + +* Add a number +* Subtract a number +* Multiply by a number +* Divide by a number +* Get the current memory state +* Reset the memory state to 0.0 + +First, let's write the service definition in aqua: + +``` +-- service definition +service Calc("calc"): + add(n: f32) + subtract(n: f32) + multiply(n: f32) + divide(n: f32) + reset() + getResult() -> f32 +``` + +Now write the implementation for this service in typescript: + +```typescript +import { Fluence } from "@fluencelabs/fluence"; +import { krasnodar } from "@fluencelabs/fluence-network-environment"; +import { registerCalc, CalcDef } from "./_aqua/calc"; + +class Calc implements CalcDef { + private _state: number = 0; + + add(n: number) { + this._state += n; + } + + subtract(n: number) { + this._state -= n; + } + + multiply(n: number) { + this._state *= n; + } + + divide(n: number) { + this._state /= n; + } + + reset() { + this._state = 0; + } + + getResult() { + return this._state; + } +} + +const keypress = async () => { + process.stdin.setRawMode(true); + return new Promise<void>((resolve) => + process.stdin.once("data", () => { + process.stdin.setRawMode(false); + resolve(); + }) + ); +}; + +async function main() { + await Fluence.start({ + connectTo: krasnodar[0], + }); + + registerCalc(new Calc()); + + console.log("application started"); + console.log("peer id is: ", Fluence.getStatus().peerId); + console.log("relay is: ", Fluence.getStatus().relayPeerId); + console.log("press any key to continue"); + await keypress(); + + await Fluence.stop(); +} + +main(); +``` + +As you can see, all the service logic has been implemented in typescript. You have the full power of npm at your disposal.
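Because the `Calc` class is plain TypeScript, its logic can be sanity-checked locally without starting a peer or touching the network. A quick sketch, repeating the class standalone so it runs on its own:

```typescript
// Same Calc implementation as in the listing above, shown standalone
// so it can be exercised without a running Fluence peer.
class Calc {
  private _state: number = 0;

  add(n: number) { this._state += n; }
  subtract(n: number) { this._state -= n; }
  multiply(n: number) { this._state *= n; }
  divide(n: number) { this._state /= n; }
  reset() { this._state = 0; }
  getResult() { return this._state; }
}

const calc = new Calc();
calc.add(10);      // state: 10
calc.multiply(5);  // state: 50
calc.subtract(8);  // state: 42
calc.divide(6);    // state: 7
console.log(calc.getResult()); // prints 7
```

This mirrors the sequence of calls made from aqua in the next snippet, so the expected result there is the same.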
+ +Now try running the application: + +```bash +> node -r ts-node/register src/index.ts + +application started +peer id is: 12D3KooWLBkw4Tz8bRoSriy5WEpHyWfU11jEK3b5yCa7FBRDRWH3 +relay is: 12D3KooWSD5PToNiLQwKDXsu8JSysCwUt8BVUJEqCHcDe7P5h45e +press any key to continue +``` + +And the service can be called from aqua. For example: + +``` +const peer ?= "12D3KooWLBkw4Tz8bRoSriy5WEpHyWfU11jEK3b5yCa7FBRDRWH3" +const relay ?= "12D3KooWSD5PToNiLQwKDXsu8JSysCwUt8BVUJEqCHcDe7P5h45e" + +func demoCalculation() -> f32: + on peer via relay + Calc.add(10) + Calc.multiply(5) + Calc.subtract(8) + Calc.divide(6) + res <- Calc.getResult() + <- res +``` diff --git a/fluence-js/6-reference.md b/fluence-js/6-reference.md new file mode 100644 index 0000000..7bb3071 --- /dev/null +++ b/fluence-js/6-reference.md @@ -0,0 +1,3 @@ +# API reference + +API reference is available at [http://fluence.one/fluence-js/](http://fluence.one/fluence-js/) diff --git a/fluence-js/README.md b/fluence-js/README.md new file mode 100644 index 0000000..0ee635c --- /dev/null +++ b/fluence-js/README.md @@ -0,0 +1,13 @@ +# Fluence JS + +Fluence JS is an implementation of the Fluence protocol for JavaScript-based environments. It can connect browsers, Node.js applications, and so on to the Fluence p2p network. + +Similar to the [Rust Fluence Peer implementation](https://github.com/fluencelabs/fluence) it includes: + +* Peer-to-peer communication layer (via [js-libp2p](https://github.com/libp2p/js-libp2p)) +* [Aqua VM](https://github.com/fluencelabs/aquavm) +* Builtin services + +Fluence JS can call services and functions on the Fluence network, and expose new APIs to the p2p network directly from TypeScript and JavaScript. [Aqua language](https://github.com/fluencelabs/aqua) uses Fluence JS as a compilation target, and they are designed to [work in tandem](https://doc.fluence.dev/docs/js-sdk/3\_in\_depth#understanding-the-aqua-compiler-output).
+ +Fluence JS can be used with any framework of your choice (or even without frameworks). diff --git a/fluence-js/changelog.md b/fluence-js/changelog.md new file mode 100644 index 0000000..37e2de3 --- /dev/null +++ b/fluence-js/changelog.md @@ -0,0 +1,137 @@ +# Changelog + +Fluence JS uses the following versioning scheme: `0.BREAKING.ENHANCING` + +* `0` shows that Fluence JS does not meet its vision yet, so the API can change quickly +* the `BREAKING` part is incremented for each breaking API change +* the `ENHANCING` part is incremented for every fix and update which is compatible on the API level + +## [0.20.2](https://github.com/fluencelabs/fluence-js/releases/tag/v0.20.2) – February 23, 2022 + +Fix copy-avm-public script: include marine-js.wasm to the copy process ([#134](https://github.com/fluencelabs/fluence-js/pull/134)) + +## [0.20.1](https://github.com/fluencelabs/fluence-js/releases/tag/v0.20.1) – February 21, 2022 + +Add missing builtins, Implement timestamps\_ms and timestamps\_sec ([#133](https://github.com/fluencelabs/fluence-js/pull/133)) + +## [0.20.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.20.0) – February 18, 2022 + +Switch to marine-web based AquaVM runner ([#132](https://github.com/fluencelabs/fluence-js/pull/132)) + +## [0.19.2](https://github.com/fluencelabs/fluence-js/releases/tag/v0.19.2) – February 17, 2022 + +Using polyfill for Buffer in browsers ([#129](https://github.com/fluencelabs/fluence-js/pull/129)) + +Implement additional builtins: array\_length, sha256\_string, concat\_strings ([#130](https://github.com/fluencelabs/fluence-js/pull/130)) + +Implement debug.stringify service ([#125](https://github.com/fluencelabs/fluence-js/pull/125)) + +Update avm version to 0.20.5 ([#131](https://github.com/fluencelabs/fluence-js/pull/131)) + +## [0.19.1](https://github.com/fluencelabs/fluence-js/releases/tag/v0.19.1) – February 4, 2022 + +Sig service redesign.
([#126](https://github.com/fluencelabs/fluence-js/pull/126)) + +## [0.19.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.19.0) – January 27, 2022 + +Update libp2p-related packages versions. Fix 'stream reset' error. ([#123](https://github.com/fluencelabs/fluence-js/pull/123)) + +## [0.18.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.18.0) – December 29, 2021 + +FluencePeer: Update AVM version to 0.20.0 ([#120](https://github.com/fluencelabs/fluence-js/pull/120)) + +## [0.17.1](https://github.com/fluencelabs/fluence-js/releases/tag/v0.17.1) – December 29, 2021 + +FluencePeer: Update AvmRunner to 0.1.2 (fix issue with incorrect baseUrl) ([#119](https://github.com/fluencelabs/fluence-js/pull/119)) + +## [0.17.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.17.0) – December 28, 2021 + +JS Peer does not embed the AVM interpreter any more. Instead, [AVM Runner](https://github.com/fluencelabs/avm-runner-background) is used to run AVM in the background, giving a huge performance boost. This is a **breaking change**: browser applications no longer need to bundle the `avm.wasm` file and the runner script. See [documentation](4\_run\_in\_browser-1.md) for more info.
+ +([#111](https://github.com/fluencelabs/fluence-js/pull/120)) + +## [0.16.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.16.0) – December 22, 2021 + +FluencePeer: Update AVM version to 0.19.3 ([#115](https://github.com/fluencelabs/fluence-js/pull/115)) + +## [0.15.4](https://github.com/fluencelabs/fluence-js/releases/tag/v0.15.4) – December 13, 2021 + +FluencePeer: Update AVM version to 0.17.7 ([#113](https://github.com/fluencelabs/fluence-js/pull/113)) + +## [0.15.3](https://github.com/fluencelabs/fluence-js/releases/tag/v0.15.3) – December 10, 2021 + +**FluencePeer:** + +* Add built-in service to sign data and verify signatures ([#110](https://github.com/fluencelabs/fluence-js/pull/110)) +* Update AVM version to 0.17.6 ([#112](https://github.com/fluencelabs/fluence-js/pull/112)) + +## [0.15.2](https://github.com/fluencelabs/fluence-js/releases/tag/v0.15.2) – November 30, 2021 + +Add particleId to error message when an aqua function times out ([#106](https://github.com/fluencelabs/fluence-js/pull/106)) + +## [0.15.1](https://github.com/fluencelabs/fluence-js/releases/tag/v0.15.0) – November 28, 2021 + +**FluencePeer:** + +* Fix timeout builtin error message ([#103](https://github.com/fluencelabs/fluence-js/pull/103)) + +**Compiler support:** + +Issue fixes for `registerService` function + +* Throwing error if registerService was called on a non-initialized peer. 
+* Fix issue with incorrect context being passed to class-based implementations of user services +* Fix typo in JSDoc + +([#104](https://github.com/fluencelabs/fluence-js/pull/104)) + +## [0.15.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.15.0) – November 17, 2021 + +**FluencePeer:** + +* Implement peer.timeout built-in function ([#101](https://github.com/fluencelabs/fluence-js/pull/101)) +* Update AVM: add support for the restriction operator ([#102](https://github.com/fluencelabs/fluence-js/pull/102)) + +## [0.14.3](https://github.com/fluencelabs/fluence-js/releases/tag/v0.14.3) – November 10, 2021 + +**FluencePeer:** + +* Extend error handling. Now aqua function calls fail early with the user-friendly error message ([#91](https://github.com/fluencelabs/fluence-js/pull/98)) + +**Compiler support:** + +* Define and export FnConfig interface ([#97](https://github.com/fluencelabs/fluence-js/pull/97)) +* Fix issue with incorrect TTL value in function calls config ([#100](https://github.com/fluencelabs/fluence-js/pull/100)) + +## [0.14.2](https://github.com/fluencelabs/fluence-js/releases/tag/v0.14.2) – October 21, 2021 + +FluencePeer: add option to specify default TTL for all new particles ([#91](https://github.com/fluencelabs/fluence-js/pull/91)) + +## [0.14.1](https://github.com/fluencelabs/fluence-js/releases/tag/v0.14.1) – October 20, 2021 + +Compiler support: fix issue with incorrect check for missing fields in service registration ([#90](https://github.com/fluencelabs/fluence-js/pull/90)) + +## [0.14.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.14.0) – October 20, 2021 + +Compiler support: added support for asynchronous code in service definitions and callback parameters of functions. 
([#83](https://github.com/fluencelabs/fluence-js/pull/83)) + +## [0.12.1](https://github.com/fluencelabs/fluence-js/releases/tag/v0.12.1) – September 14, 2021 + +* `KeyPair`: add fromBytes, toEd25519PrivateKey ([#78](https://github.com/fluencelabs/fluence-js/pull/78)) + +## [0.12.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.12.0) – September 10, 2021 + +* The API to work with the default Fluence Peer has been put under the facade `Fluence`. Method `init` was renamed to `start` and `uninit` renamed to `stop`. `connectionStatus` migrated to `getStatus`. + +To migrate from 0.11.0 to 0.12.0 + +1. `import { Fluence } from "@fluencelabs/fluence"`; instead of `FluencePeer` +2. replace `Fluence.default` with just `Fluence` +3. replace `init` with `start` and `uninit` with `stop` +4. replace `connectionInfo()` with `getStatus()` + +([#72](https://github.com/fluencelabs/fluence-js/pull/72)) + +## [0.11.0](https://github.com/fluencelabs/fluence-js/releases/tag/v0.11.0) – September 08, 2021 + +* Update JS SDK api to the new version ([#61](https://github.com/fluencelabs/fluence-js/pull/61)) diff --git a/knowledge_aquamarine/README.md b/knowledge_aquamarine/README.md new file mode 100644 index 0000000..f957c44 --- /dev/null +++ b/knowledge_aquamarine/README.md @@ -0,0 +1,6 @@ +# Aquamarine + +Fluence's Aquamarine stack is composed of Aqua and Marine. Aqua is a programming language and runtime environment for peer-to-peer workflows. Marine, on the other hand, is a general purpose runtime that executes hosted code on nodes, whereas Aqua facilitates the programming of workflows composed from hosted code. In combination, Aqua and Marine enable any distributed application. + +At the core of Aqua is the idea of pairing concurrent systems, and especially decentralized networks, with a programming and execution tool chain that avoids the centralized bottlenecks commonly introduced by workflow engines and business rule engines.
To this end, Aqua manages the communication and coordination between services, devices, and APIs without introducing a centralized gateway and can be used to express various distributed systems: from simple request-response models to comprehensive network consensus algorithms. + diff --git a/knowledge_aquamarine/hll.md b/knowledge_aquamarine/hll.md new file mode 100644 index 0000000..a549b67 --- /dev/null +++ b/knowledge_aquamarine/hll.md @@ -0,0 +1,5 @@ +# Aqua + +At the core of Fluence is the open-source language **Aqua** that allows for the programming of peer-to-peer scenarios separately from the computations on peers. + +Please see the [Aqua book](https://doc.fluence.dev/aqua-book/) for an introduction to the language and reference materials. diff --git a/knowledge_aquamarine/marine/README.md b/knowledge_aquamarine/marine/README.md new file mode 100644 index 0000000..35db668 --- /dev/null +++ b/knowledge_aquamarine/marine/README.md @@ -0,0 +1,12 @@ +# Marine + +[Marine](https://github.com/fluencelabs/marine) is a general purpose WebAssembly runtime favoring Wasm modules based on the [ECS](https://en.wikipedia.org/wiki/Entity\_component\_system) pattern or plugin architecture and uses Wasm [Interface Types](https://github.com/WebAssembly/interface-types/blob/main/proposals/interface-types/Explainer.md) (IT) to implement a [shared-nothing](https://en.wikipedia.org/wiki/Shared-nothing\_architecture) linking scheme. Fluence [nodes](https://github.com/fluencelabs/fluence) use Marine to host the Aqua VM and execute hosted Wasm services. + +The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) allows developers to hide the IT implementation details behind a handy procedural macro `[marine]` and provides the scaffolding for unit tests.
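The shared-nothing, name-based linking scheme can be pictured with a toy resolver in plain Rust. This is only an illustration of the idea, not a Marine API (the `Linker` type and all names below are assumptions for the sketch): modules exchange values through named functions, and a module can only be instantiated once every one of its named imports resolves.

```rust
use std::collections::HashMap;

// Exported functions cross the boundary by value only: no shared memory,
// just plain data in and plain data out (here simplified to i32 -> i32).
type ExportedFn = fn(i32) -> i32;

struct Linker {
    // (module name, function name) -> implementation
    exports: HashMap<(String, String), ExportedFn>,
}

impl Linker {
    fn new() -> Self {
        Linker { exports: HashMap::new() }
    }

    // "Module B" makes a function available under its module name.
    fn export(&mut self, module: &str, name: &str, f: ExportedFn) {
        self.exports.insert((module.to_string(), name.to_string()), f);
    }

    // "Module A" resolves its import by name; linking fails if it is absent.
    fn resolve(&self, module: &str, name: &str) -> Result<ExportedFn, String> {
        self.exports
            .get(&(module.to_string(), name.to_string()))
            .copied()
            .ok_or_else(|| format!("import {module}.{name} not found"))
    }
}

fn main() {
    let mut linker = Linker::new();
    linker.export("module_b", "double", |x| x * 2);

    // A resolved import can be called like any local function.
    let double = linker.resolve("module_b", "double").unwrap();
    assert_eq!(double(21), 42);

    // A missing import makes linking fail, mirroring Marine's rule that
    // a module cannot be loaded if any imported function is absent.
    assert!(linker.resolve("module_b", "missing").is_err());
}
```

The real runtime resolves Wasm imports against loaded modules' exports and marshals arguments through Interface Types, but the by-name, values-only contract is the same.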
+ + + + + + + diff --git a/knowledge_aquamarine/marine/marine-cli.md b/knowledge_aquamarine/marine/marine-cli.md new file mode 100644 index 0000000..4904be9 --- /dev/null +++ b/knowledge_aquamarine/marine/marine-cli.md @@ -0,0 +1,33 @@ +# Marine CLI + +The [Marine command line tool](https://github.com/fluencelabs/marine) provides the project `marine build` functionality, analogous to `cargo build`, which compiles the project's Rust code to _wasm32-wasi_ modules. In addition, `marine` provides utilities to inspect Wasm modules, expose Wasm module attributes or manually set module properties. + +``` +mbp16~(:|✔) % marine --help +Fluence Marine command line tool 0.6.7 +Fluence Labs + +USAGE: + marine [SUBCOMMAND] + +FLAGS: + -h, --help Prints help information + -V, --version Prints version information + +SUBCOMMANDS: + aqua Shows data types of provided module in a format suitable for Aqua + build Builds provided Rust project to Wasm + help Prints this message or the help of the given subcommand(s) + info Shows manifest and sdk version of the provided Wasm file + it Shows IT of the provided Wasm file + repl Starts Fluence application service REPL + set Sets interface types and version to the provided Wasm file +mbp16~(:|✔) % +``` + + + + + + + diff --git a/knowledge_aquamarine/marine/marine-repl.md b/knowledge_aquamarine/marine/marine-repl.md new file mode 100644 index 0000000..d42f3c5 --- /dev/null +++ b/knowledge_aquamarine/marine/marine-repl.md @@ -0,0 +1,33 @@ +# Marine REPL + +[`mrepl`](https://crates.io/crates/mrepl) is a command line tool to locally run a Marine instance to inspect, run, and test Wasm modules and service configurations. We can run the REPL either with `mrepl` or `marine repl`. + +``` +mbp16~(:|✔) % mrepl +Welcome to the Marine REPL (version 0.7.2) +Minimal supported versions + sdk: 0.6.0 + interface-types: 0.20.0 + +New version is available!
0.7.2 -> 0.7.4 +To update run: cargo +nightly install mrepl --force + +app service was created with service id = d81a4de5-55c3-4cb7-935c-3d5c6851320d +elapsed time 486.234µs + +1> help +Commands: + +n/new [config_path] create a new service (current will be removed) +l/load load a new Wasm module +u/unload unload a Wasm module +c/call [args] call function with given name from given module +i/interface print public interface of all loaded modules +e/envs print environment variables of a module +f/fs print filesystem state of a module +h/help print this message +q/quit/Ctrl-C exit + +2> +``` + diff --git a/knowledge_aquamarine/marine/marine-rs-sdk.md b/knowledge_aquamarine/marine/marine-rs-sdk.md new file mode 100644 index 0000000..6426c74 --- /dev/null +++ b/knowledge_aquamarine/marine/marine-rs-sdk.md @@ -0,0 +1,810 @@ +# Marine Rust SDK + +The [marine-rs-sdk](https://github.com/fluencelabs/marine-rs-sdk) empowers developers to create services suitable for hosting on peers of the peer-to-peer network. Such services are constructed from one or more Wasm modules, each of which is the result of Rust code compiled to the wasm32-wasi compile target, executable by the Marine runtime. + +### API + +The procedural macros `[marine]` and `[marine_test]` are the two primary features provided by the SDK. The `[marine]` macro can be applied to a function, an external block, or a structure. The `[marine_test]` macro, on the other hand, allows the use of the familiar `cargo test` to execute tests over the actual Wasm module generated from the service code. + +#### Function Export + +Applying the `[marine]` macro to a function results in its export, which means that it can be called from other modules or AIR scripts.
For the function to be compatible with this macro, its arguments must be of the `ftype`, which is defined as follows: + +`ftype` = `bool`, `u8`, `u16`, `u32`, `u64`, `i8`, `i16`, `i32`, `i64`, `f32`, `f64`, `String`\ +`ftype` = `ftype` | `Vec`<`ftype`>\ +`ftype` = `ftype` | `Record`<`ftype`> + +In other words, the arguments must be one of the types listed below: + +* one of the following Rust basic types: `bool`, `u8`, `u16`, `u32`, `u64`, `i8`, `i16`, `i32`, `i64`, `f32`, `f64`, `String` +* a vector of elements of the above types +* a vector composed of vectors of the above types, where recursion is acceptable, e.g. the type `Vec<Vec<Vec<u8>>>` is permissible +* a record, where all fields are of the basic Rust types +* a record, where all fields are of any above types or other records\ + + +The return type of a function must follow the same rules, but currently only one return type is possible. + +See the example below of an exposed function with a complex type signature and return value: + +```rust +// export TestRecord as a public data structure bound by +// the IT type constraints +#[marine] +pub struct TestRecord { + pub field_0: i32, + pub field_1: Vec<Vec<u8>>, +} + +// export foo as a public function bound by the +// IT type constraints +#[marine] +pub fn foo(arg_1: Vec<Vec<Vec<Vec<TestRecord>>>>, arg_2: String) -> Vec<Vec<Vec<Vec<TestRecord>>>> { + unimplemented!() +} +``` + + + +{% hint style="info" %} +Function Export Requirements + +* wrap a target function with the `[marine]` macro +* function arguments must be of `ftype` +* the function return type also must be of `ftype` +{% endhint %} + +#### Function Import + +The `[marine]` macro can also wrap an [`extern` block](https://doc.rust-lang.org/std/keyword.extern.html). In this case, all functions declared in it are considered imported functions. If there are imported functions in some module, say, module A, then: + +* There should be another module, module B, that exports the same functions. The name of module B is indicated in the `link` macro (see examples below).
+ +* Module B should be loaded to `Marine` by the moment the loading of module A starts. Module A cannot be loaded if at least one imported function is absent in `Marine`. + +See the examples below for wrapped `extern` block usage: + +{% tabs %} +{% tab title="Example 1" %} +```rust +#[marine] +pub struct TestRecord { + pub field_0: i32, + pub field_1: Vec<Vec<u8>>, +} + +// wrap the extern block with the marine macro to expose the function +// as an import to the Marine VM +#[marine] +#[link(wasm_import_module = "some_module")] +extern "C" { + pub fn foo(arg: Vec<Vec<Vec<Vec<TestRecord>>>>, arg_2: String) -> Vec<Vec<Vec<Vec<TestRecord>>>>; +} +``` +{% endtab %} + +{% tab title="Example 2" %} +```rust +#[marine] +#[link(wasm_import_module = "some_module")] +extern "C" { + pub fn foo(arg: Vec<Vec<Vec<Vec<u8>>>>) -> Vec<Vec<Vec<Vec<u8>>>>; +} +``` +{% endtab %} +{% endtabs %} + + + +{% hint style="info" %} + + +#### Function import requirements + +* wrap an extern block with the function(s) to be imported with the `[marine]` macro +* all function(s) arguments must be of the `ftype` type +* the return type of the function(s) must be `ftype` +{% endhint %} + +#### Structures + +Finally, the `[marine]` macro can wrap a `struct`, making it possible to use it as a function argument or return type. Note that + +* only macro-wrapped structures can be used as function arguments and return types +* all fields of the wrapped structure must be public and of the `ftype`.
+* it is possible to have inner records in the macro-wrapped structure and to import wrapped structs from other crates + +See the example below for wrapping `struct`: + +{% tabs %} +{% tab title="Example 1" %} +```rust +#[marine] +pub struct TestRecord0 { + pub field_0: i32, +} + +#[marine] +pub struct TestRecord1 { + pub field_0: i32, + pub field_1: String, + pub field_2: Vec<u8>, + pub test_record_0: TestRecord0, +} + +#[marine] +pub struct TestRecord2 { + pub test_record_0: TestRecord0, + pub test_record_1: TestRecord1, +} + +#[marine] +fn foo(mut test_record: TestRecord2) -> TestRecord2 { unimplemented!(); } +``` +{% endtab %} + +{% tab title="Example 2" %} +```rust +#[marine] +pub struct TestRecord0 { + pub field_0: i32, +} + +#[marine] +pub struct TestRecord1 { + pub field_0: i32, + pub field_1: String, + pub field_2: Vec<u8>, + pub test_record_0: TestRecord0, +} + +#[marine] +pub struct TestRecord2 { + pub test_record_0: TestRecord0, + pub test_record_1: TestRecord1, +} + +#[marine] +#[link(wasm_import_module = "some_module")] +extern "C" { + fn foo(mut test_record: TestRecord2) -> TestRecord2; +} +``` +{% endtab %} + +{% tab title="Example 3" %} +```rust +mod data_crate { + use fluence::marine; + #[marine] + pub struct Data { + pub name: String, + pub data: f64, + } +} + +use data_crate::Data; +use fluence::marine; + +fn main() {} + +#[marine] +fn some_function() -> Data { + Data { + name: "example".into(), + data: 1.0, + } +} + +``` +{% endtab %} +{% endtabs %} + + + +{% hint style="info" %} + + +> #### Structure passing requirements +> +> * wrap a structure with the `[marine]` macro +> * all structure fields must be of the `ftype` +> * the structure must be referred to without a preceding package import in a function signature, i.e. `StructureName` but not `package_name::module_name::StructureName` +> * wrapped structs can be imported from crates +{% endhint %} + +#### Call Parameters + +There is a special API function `fluence::get_call_parameters()` that returns an
instance of the [`CallParameters`](https://github.com/fluencelabs/marine-rs-sdk/blob/master/src/call\_parameters.rs#L35) structure defined as follows: + +```rust +pub struct CallParameters { + /// Peer id of the AIR script initiator. + pub init_peer_id: String, + + /// Id of the current service. + pub service_id: String, + + /// Id of the service creator. + pub service_creator_peer_id: String, + + /// Id of the host which runs this service. + pub host_id: String, + + /// Id of the particle whose execution resulted in a call to this service. + pub particle_id: String, + + /// Security tetraplets which describe the origin of the arguments. + pub tetraplets: Vec<Vec<SecurityTetraplet>>, +} +``` + +CallParameters are especially useful in constructing authentication services: + +```rust +// auth.rs +use fluence::{marine, CallParameters}; + +pub fn is_owner() -> bool { + let meta = fluence::get_call_parameters(); + let caller = meta.init_peer_id; + let owner = meta.service_creator_peer_id; + + caller == owner +} + +#[marine] +pub fn am_i_owner() -> bool { + is_owner() +} +``` + +#### MountedBinaryResult + +Due to the inherent limitations of Wasm modules, such as a lack of sockets, it may be necessary for a module to interact with its host to bridge such gaps, e.g. use an HTTPS transport provider like _curl_.
In order for a Wasm module to use a host's _curl_ capabilities, we need to provide access to the binary, which at the code level is achieved through the Rust `extern` block: + +```rust +// Importing a linked binary, curl, to a Wasm module +#![allow(improper_ctypes)] + +use fluence::marine; +use fluence::module_manifest; +use fluence::MountedBinaryResult; + +module_manifest!(); + +pub fn main() {} + +#[marine] +pub fn curl_request(curl_cmd: Vec<String>) -> MountedBinaryResult { + let response = curl(curl_cmd); + response +} + +#[marine] +#[link(wasm_import_module = "host")] +extern "C" { + fn curl(cmd: Vec<String>) -> MountedBinaryResult; +} +``` + +The above code creates a "curl adapter", i.e., a Wasm module that allows other Wasm modules to use the `curl_request` function, which calls the imported _curl_ binary in this case, to make http calls. Please note that we are wrapping the `extern` block with the `[marine]` macro and introduce a Marine-native data structure [`MountedBinaryResult`](https://github.com/fluencelabs/marine/blob/master/examples/url-downloader/curl\_adapter/src/main.rs) as the linked-function return value. + +Please note that if you want to use `curl_request` with testing, see below, the curl call needs to be marked unsafe, e.g.: + +```rust + let response = unsafe { curl(curl_cmd) }; +``` + +since cargo does not have access to the marine macro to handle unsafe. + +MountedBinaryResult itself is a Marine-compatible struct containing a binary's process return code, error string, and stdout and stderr as byte arrays: + +```rust +#[marine] +#[derive(Clone, PartialEq, Default, Eq, Debug, Serialize, Deserialize)] +pub struct MountedBinaryResult { + /// Return process exit code or host execution error code, where SUCCESS_CODE means success. + pub ret_code: i32, + + /// Contains the string representation of an error, if ret_code != SUCCESS_CODE. + pub error: String, + + /// The data that the process wrote to stdout.
+ pub stdout: Vec<u8>, + + /// The data that the process wrote to stderr. + pub stderr: Vec<u8>, +} + +``` + +MountedBinaryResult then can be used in a variety of match or conditional tests. + +#### Testing + +Since we are compiling to a wasm32-wasi target with `ftype` constraints, the basic `cargo test` is not all that useful or even usable for our purposes. To alleviate that limitation, Fluence has introduced the [`[marine-test]` macro](https://github.com/fluencelabs/marine-rs-sdk-test/tree/master/crates/marine-test-macro) that does a lot of the heavy lifting to allow developers to use `cargo test` as intended. That is, the `[marine-test]` macro generates the necessary code to call Marine, one instance per test function, based on the Wasm module and associated configuration file so that the actual test function is run against the Wasm module, not the native code. + +To use the `[marine-test]` macro please add the `marine-rs-sdk-test` crate to the `[dev-dependencies]` section of `Cargo.toml`: + +```toml +[dev-dependencies] +marine-rs-sdk-test = "0.2.0" +``` + +Let's have a look at an implementation example: + +```rust +use marine_rs_sdk::marine; +use marine_rs_sdk::module_manifest; + +module_manifest!(); + +pub fn main() {} + +#[marine] +pub fn greeting(name: String) -> String { // 1 + format!("Hi, {}", name) +} + +#[cfg(test)] +mod tests { + use marine_rs_sdk_test::marine_test; // 2 + + #[marine_test(config_path = "../Config.toml", modules_dir = "../artifacts")] // 3 + fn empty_string(greeting: marine_test_env::greeting::ModuleInterface) { + let actual = greeting.greeting(String::new()); // 4 + assert_eq!(actual, "Hi, "); + } + + #[marine_test(config_path = "../Config.toml", modules_dir = "../artifacts")] + fn non_empty_string(greeting: marine_test_env::greeting::ModuleInterface) { + let actual = greeting.greeting("name".to_string()); + assert_eq!(actual, "Hi, name"); + } +} +``` + +1.
We wrap a basic _greeting_ function with the `[marine]` macro, which results in the greeting.wasm module +2. We wrap our tests as usual with `[cfg(test)]` and import the marine _test crate_. Do **not** import _super_ or the _local crate_. +3. Instead, we apply the `[marine_test]` macro to each of the test functions by providing the path to the config file, e.g., Config.toml, and the directory containing the Wasm module we obtained after compiling our project with `marine build`. Moreover, we add the type of the test as an argument in the function signature. It is imperative that the project build precedes the test run; otherwise the required Wasm file will be missing. +4. The target of our tests is the `pub fn greeting` function. Since we are calling the function from the Wasm module, we must prefix the function name with the module namespace -- `greeting` in this example, as specified in the function argument. + +Now that we have our Wasm module and tests in place, we can proceed with `cargo test --release`. Note that using the `release` flag vastly improves the import speed of the necessary Wasm modules. + +The same macro also allows testing data flow between multiple services, so you do not need to deploy anything to the network and write an Aqua app just for basic testing.
Let's look at an example: + +{% tabs %} +{% tab title="test.rs" %} +```rust +fn main() {} + +#[cfg(test)] +mod tests { + use marine_rs_sdk_test::marine_test; + #[marine_test( // 1 + producer( + config_path = "../producer/Config.toml", + modules_dir = "../producer/artifacts" + ), + consumer( + config_path = "../consumer/Config.toml", + modules_dir = "../consumer/artifacts" + ) + )] + fn test() { + let mut producer = marine_test_env::producer::ServiceInterface::new(); // 2 + let mut consumer = marine_test_env::consumer::ServiceInterface::new(); + let input = marine_test_env::producer::Input { // 3 + first_name: String::from("John"), + last_name: String::from("Doe"), + }; + let data = producer.produce(input); // 4 + let result = consumer.consume(data); + assert_eq!(result, "John Doe") + } +} + +``` +{% endtab %} + +{% tab title="producer.rs" %} +```rust +use marine_rs_sdk::marine; +use marine_rs_sdk::module_manifest; + +module_manifest!(); + +pub fn main() {} + +#[marine] +pub struct Data { + pub name: String, +} + +#[marine] +pub struct Input { + pub first_name: String, + pub last_name: String, +} + +#[marine] +pub fn produce(data: Input) -> Data { + Data { + name: format!("{} {}", data.first_name, data.last_name), + } +} + +``` +{% endtab %} + +{% tab title="consumer.rs" %} +```rust +use marine_rs_sdk::marine; +use marine_rs_sdk::module_manifest; + +module_manifest!(); + +pub fn main() {} + +#[marine] +pub struct Data { + pub name: String, +} + +#[marine] +pub fn consume(data: Data) -> String { + data.name +} + +``` +{% endtab %} + +{% tab title="test_on_mod.rs" %} +```rust +fn main() {} + +#[cfg(test)] +#[marine_rs_sdk_test::marine_test( + producer( + config_path = "../producer/Config.toml", + modules_dir = "../producer/artifacts" + ), + consumer( + config_path = "../consumer/Config.toml", + modules_dir = "../consumer/artifacts" + ) +)] +mod tests_on_mod { + #[test] + fn test() { + let mut producer = marine_test_env::producer::ServiceInterface::new(); + let mut 
consumer = marine_test_env::consumer::ServiceInterface::new(); + let input = marine_test_env::producer::Input { + first_name: String::from("John"), + last_name: String::from("Doe"), + }; + let data = producer.produce(input); + let result = consumer.consume(data); + assert_eq!(result, "John Doe") + } +} + +``` +{% endtab %} +{% endtabs %} + +1. We wrap the `test` function with the `marine_test` macro by providing named service configurations with module locations. Based on its arguments the macro defines a `marine_test_env` module with an interface to the services. +2. We create new services. Each `ServiceInterface::new()` runs a new marine runtime with the service. +3. We prepare data to pass to a service using structure definition from `marine_test_env`. The macro finds all structures used in the service interface functions and defines them in the corresponding submodule of `marine_test_env` . +4. We call a service function through the `ServiceInterface` object. +5. It is possible to use the result of one service call as an argument for a different service call. The interface types with the same structure have the same rust type in `marine_test_env`. + +In the `test_on_mod.rs` tab we can see another option — applying `marine_test` to a `mod`. The macro just defines the `marine_test_env` at the beginning of the module and then it can be used as usual everywhere inside the module. + +The full example is [here](https://github.com/fluencelabs/marine/tree/master/examples/multiservice\_marine\_test). + +The `marine_test` macro also gives access to the interface of internal modules which may be useful for setting up a test environment. This feature is designed to be used in situations when it is simpler to set up a service for a test through internal functions than through the service interface. 
To illustrate this feature we have rewritten the previous example: + +```rust +fn main() {} + +#[cfg(test)] +mod tests { + use marine_rs_sdk_test::marine_test; + #[marine_test( + producer( + config_path = "../producer/Config.toml", + modules_dir = "../producer/artifacts" + ), + consumer( + config_path = "../consumer/Config.toml", + modules_dir = "../consumer/artifacts" + ) + )] + fn test() { + let mut producer = marine_test_env::producer::ServiceInterface::new(); + let mut consumer = marine_test_env::consumer::ServiceInterface::new(); + let input = marine_test_env::producer::modules::producer::Input { // 1 + first_name: String::from("John"), + last_name: String::from("Doe"), + }; + let data = producer.modules.producer.produce(input); // 2 + let consumer_data = marine_test_env::consumer::modules::consumer::Data { name: data.name }; // 3 + let result = consumer.modules.consumer.consume(consumer_data); + assert_eq!(result, "John Doe") + } +} + +``` + +1. We access the internal service interface to construct an interface structure. To do so, we use the following pattern: `marine_test_env::$service_name::modules::$module_name::$structure_name`. +2. We access the internal service interface and directly call a function from one of the modules of this service. To do so, we use the following pattern: `$service_object.modules.$module_name.$function_name`. +3. In the previous example, the same interface types had the same Rust types. This is more limited when using internal modules: the property holds only when structures are defined in internal modules of one service, or when structures are defined in service interfaces of different services. So, we need to construct the proper type to pass data to the internals of another module. + +The testing SDK also has an interface for [Cargo build scripts](https://doc.rust-lang.org/cargo/reference/build-scripts.html).
Some IDEs can analyze files generated in build scripts, providing code completion and error highlighting for the generated code. Using it may be a little tricky, though, because build scripts are not designed for such things. + +Actions required to set up the IDE: + +CLion: + +* in `Help -> Actions -> Experimental Features` enable `org.rust.cargo.evaluate.build.scripts` +* refresh the Cargo project in order to update generated code: change `Cargo.toml` and build from the IDE, or press `Refresh Cargo Project` in the Cargo tab. + +VS Code: + +* install the `rust-analyzer` plugin +* change `Cargo.toml` to let the plugin update code from generated files + +The update will not work instantly: you should build the service to Wasm, and then trigger a `build.rs` run again, but for the native target. + +And here is an example of using this: + +{% tabs %} +{% tab title="build.rs" %} +```rust +use marine_rs_sdk_test::generate_marine_test_env; +use marine_rs_sdk_test::ServiceDescription; +fn main() { + let services = vec![ // <- 1 + ("greeting".to_string(), ServiceDescription { + config_path: "Config.toml".to_string(), + modules_dir: Some("artifacts".to_string()), + }) + ]; + + let target = std::env::var("CARGO_CFG_TARGET_ARCH").unwrap(); + if target != "wasm32" { // <- 2 + generate_marine_test_env(services, "marine_test_env.rs", file!()); // <- 3 + } + + println!("cargo:rerun-if-changed=src/main.rs"); // <- 4 +} +``` +{% endtab %} + +{% tab title="src/main.rs" %} +```rust +use marine_rs_sdk::marine; +use marine_rs_sdk::module_manifest; + +module_manifest!(); + +pub fn main() {} + +#[marine] +pub fn greeting(name: String) -> String { + format!("Hi, {}", name) +} + +#[cfg(test)] +mod built_tests { + marine_rs_sdk_test::include_test_env!("/marine_test_env.rs"); // <- 5 + #[test] + fn non_empty_string() { + let mut greeting = marine_test_env::greeting::ServiceInterface::new(); + let actual = greeting.greeting("name".to_string()); + assert_eq!(actual, "Hi, name"); + } +} +``` +{% endtab %} + +{%
tab title="Cargo.toml" %} +```toml +[package] +name = "wasm-build-rs" +version = "0.1.0" +authors = ["Fluence Labs"] +description = "The greeting module for the Fluence network" +repository = "https://github.com/fluencelabs/marine/tree/master/examples/build_rs" +edition = "2018" +publish = false + +[[bin]] +name = "build_rs_test" +path = "src/main.rs" + +[dependencies] +marine-rs-sdk = "0.6.11" + +[dev-dependencies] +marine-rs-sdk-test = "0.4.0" + +[build-dependencies] +marine-rs-sdk-test = "0.4.0" # <- 6 + +``` +{% endtab %} +{% endtabs %} + +1. We create a vector of (service\_name, service\_description) pairs to pass to the generator. The structure is the same as with the multi-service `marine_test`. +2. We check whether we are building for a non-Wasm target. As we build this Marine service only for `wasm32-wasi` and the tests are built for the native target, we can generate `marine_test_env` only for tests. This is needed because our generator depends on the artifacts from the `wasm32-wasi` build. We suggest using a separate crate when using build scripts for testing purposes; it is done inline here for simplicity. +3. We pass our services, the name of the file to generate, and the path to the build script file to the `marine_test_env` generator. Always use `file!()` for the last argument. The generated file will be placed in the directory specified by the `OUT_DIR` variable, which is set by cargo. The build script must not change any files outside of this directory. +4. We set up a condition to re-run the build script. It should be customized; a good choice is to re-run the build script when the .wasm files or `Config.toml` are changed. +5. We import the generated file with the `marine_test_env` definition into the project. +6. Do not forget to add `marine-rs-sdk-test` to the `build-dependencies` section of `Cargo.toml`. + +### Features + +The SDK has two useful features: `logger` and `debug`.
+ +#### Logger + +Using logging is a simple way to assist in debugging without deploying the module(s) to a peer-to-peer network node. The `logger` feature allows you to use a special logger built on top of the [log](https://crates.io/crates/log) crate. + +To enable logging, specify the `logger` feature of the Fluence SDK in `Config.toml` and add the [log](https://docs.rs/log/0.4.11/log/) crate: + +```toml +[dependencies] +log = "0.4.14" +fluence = { version = "0.6.9", features = ["logger"] } +``` + +The logger should be initialized before use. This can be done in the `main` function as shown in the example below. + +```rust +use fluence::marine; +use fluence::WasmLogger; + +pub fn main() { + WasmLogger::new() + // with_log_level can be skipped, + // logger will be initialized with Info level in this case. + .with_log_level(log::Level::Info) + .build() + .unwrap(); +} + +#[marine] +pub fn put(name: String, file_content: Vec<u8>) -> String { + log::info!("put called with file name {}", name); + unimplemented!() +} +``` + +In addition to the standard log creation features, the Fluence logger allows the so-called target map to be configured during the initialization step. This allows you to filter out logs by `logging_mask`, which can be set for each module in the service configuration.
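Conceptually, the `logging_mask` filter is a simple bitwise test against the target's bit. The stand-alone sketch below models that rule; the `target_bit` and `is_enabled` helpers are illustrative assumptions, not the SDK's actual internals:

```rust
// Illustrative model of target-based log filtering; not the SDK's real code.
// Each log target owns one bit of a 64-bit mask; a record passes the filter
// only if the module's logging_mask has that bit set.

fn target_bit(target: &str) -> Option<i64> {
    // Hypothetical target map, mirroring the per-module configuration.
    match target {
        "instruction" => Some(1 << 1),
        "data_cache" => Some(1 << 2),
        "next_peer_pks" => Some(1 << 3),
        "subtree_complete" => Some(1 << 4),
        _ => None, // unknown targets are never logged
    }
}

fn is_enabled(logging_mask: i64, target: &str) -> bool {
    match target_bit(target) {
        Some(bit) => logging_mask & bit != 0,
        None => false,
    }
}

fn main() {
    // With only the "instruction" bit set, "data_cache" records are dropped.
    let logging_mask = 1 << 1;
    for target in ["instruction", "data_cache"] {
        println!("{}: {}", target, is_enabled(logging_mask, target));
    }
}
```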
Let's consider an example: + +```rust +const TARGET_MAP: [(&str, i64); 4] = [ + ("instruction", 1 << 1), + ("data_cache", 1 << 2), + ("next_peer_pks", 1 << 3), + ("subtree_complete", 1 << 4), +]; + +pub fn main() { + use std::collections::HashMap; + use std::iter::FromIterator; + + let target_map = HashMap::from_iter(TARGET_MAP.iter().cloned()); + + fluence::WasmLogger::new() + .with_target_map(target_map) + .build() + .unwrap(); +} + +#[marine] +pub fn foo() { + log::info!(target: "instruction", "this will print if (logging_mask & 2) != 0"); + log::info!(target: "data_cache", "this will print if (logging_mask & 4) != 0"); +} +``` + +Here, an array called `TARGET_MAP` is defined and provided to the logger in the `main` function of a module. Each entry of this array contains a string (a target) and a number that represents the bit position in the 64-bit mask `logging_mask`. When you write a log request such as `log::info!`, its target must coincide with one of the targets defined in the `TARGET_MAP` array. The log will be printed if the module's `logging_mask` has the corresponding target bit set. + +{% hint style="info" %} +The REPL also uses the log crate to print logs from Wasm modules. Log messages will be printed if the `RUST_LOG` environment variable is specified. +{% endhint %} + +#### Debug + +The second feature is limited to obtaining some of the internal details of the IT execution. Normally, this feature should not be needed by a backend developer.
Here you can see an example of such details for the greeting service compiled with the `debug` feature: + +```bash +# running the greeting service compiled with debug feature +~ $ RUST_LOG="info" fce-repl Config.toml +Welcome to the Fluence FaaS REPL +app service's created with service id = e5cfa463-ff50-4996-98d8-4eced5ac5bb9 +elapsed time 40.694769ms + +1> call greeting greeting "user" +[greeting] sdk.allocate: 4 +[greeting] sdk.set_result_ptr: 1114240 +[greeting] sdk.set_result_size: 8 +[greeting] sdk.get_result_ptr, returns 1114240 +[greeting] sdk.get_result_size, returns 8 +[greeting] sdk.get_result_ptr, returns 1114240 +[greeting] sdk.get_result_size, returns 8 +[greeting] sdk.deallocate: 0x110080 8 + +result: String("Hi, user") + elapsed time: 222.675µs +``` + +The most important information in these logs relates to the `allocate`/`deallocate` function calls. The `sdk.allocate: 4` line corresponds to passing the 4-byte `user` string to the Wasm module: memory is allocated inside the module and the string is copied there. Conversely, `sdk.deallocate: 0x110080 8` refers to passing the 8-byte result string `Hi, user` to the host side. Since all arguments and results are passed by value, `deallocate` is called to free memory that is no longer needed inside the Wasm module. + +#### Module Manifest + +The `module_manifest!` macro embeds the Interface Types (IT), SDK, and Rust project versions, as well as additional project and build information, into the Wasm module.
For the macro to be usable, it needs to be imported and initialized in the _main.rs_ file: + +```rust +// main.rs +use fluence::marine; +use fluence::module_manifest; // import manifest macro + +module_manifest!(); // initialize macro + +fn main() {} + +#[marine] +fn some_function() {} +``` + +Using the Marine CLI, we can inspect a module's manifest with `marine info`: + +```bash +mbp16~/localdev/struct-exp(main|…) % marine info -i artifacts/*.wasm +it version: 0.20.1 +sdk version: 0.6.0 +authors: The Fluence Team +version: 0.1.0 +description: foo-wasm, a Marine wasi module +repository: +build time: 2021-06-11 21:08:59.855352 +00:00 UTC +``` + diff --git a/knowledge_security.md b/knowledge_security.md new file mode 100644 index 0000000..3457ee6 --- /dev/null +++ b/knowledge_security.md @@ -0,0 +1,158 @@ +# Security + +In the Fluence network, an application consists of one or more services composed with Aquamarine. Services expose actions in the form of functions, and these actions may require authorization. In this section, we present the concept of Security Tetraplets: verifiable origins of the function arguments in the form of (peer\_id, service\_id, function\_name, data\_getter) tetraplets. This concept enables the secure composition of function calls with AIR scripts. + +## Decouple the Authorization Service + +Aquamarine, as a composability medium, needs to take care of many aspects of security to enable composing services of different vendors in a safe way. Let's consider the example of an authorization service – a service that verifies permissions: + +``` +// Pseudocode of a service interface +service Auth: + // Works only for the service creator + def grant_permission(to_peer: PeerId) + def check_permission(): bool +``` + +The service contains all the data necessary to check that permission was granted to a given peer. That is, we have both authentication and authorization logic.
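The pseudocode above can be modeled with a minimal in-memory sketch in plain Rust. This is purely illustrative: the owner/permitted-set representation is an assumption for the example, not how a real Fluence auth service must be implemented, and no Fluence APIs are used.

```rust
use std::collections::HashSet;

// Toy in-memory model of the Auth service interface sketched above.
// "owner" stands in for the service creator; nothing here is Fluence API.
struct Auth {
    owner: String,
    permitted: HashSet<String>,
}

impl Auth {
    fn new(owner: &str) -> Self {
        Auth { owner: owner.to_string(), permitted: HashSet::new() }
    }

    // Works only for the service creator (authentication of the caller).
    fn grant_permission(&mut self, caller: &str, to_peer: &str) -> Result<(), String> {
        if caller != self.owner {
            return Err("only the service creator can grant permission".to_string());
        }
        self.permitted.insert(to_peer.to_string());
        Ok(())
    }

    // The authorization check other services rely on.
    fn check_permission(&self, caller: &str) -> bool {
        caller == self.owner || self.permitted.contains(caller)
    }
}

fn main() {
    let mut auth = Auth::new("creator-peer");
    auth.grant_permission("creator-peer", "peer-a").unwrap();
    println!("peer-a permitted: {}", auth.check_permission("peer-a"));
    println!("peer-b permitted: {}", auth.check_permission("peer-b"));
}
```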
+ +Consider a simple Blog service with an authorization argument for writes, i.e. adding posts. + +``` +service Blog: + def add_post(text: string, is_permitted: bool) + def list_posts(): Post[] +``` + +By decoupling the storage of posts from the user and permissions model, we add a lot of flexibility to our Blog service. Not only can we use it for, say, both personal and corporate blogs but also as a building block for more complex social interactions. Just remember, the blog service itself doesn't care about security guards; it just stores posts, that's all. + +Let's write an AIR script that checks permissions and adds a new post, where authNode is the peer running the auth service, authSrvId is the id of that service, and blogNode is the peer hosting the blog service: + +``` +;; Script is slightly simplified for better readability +;; Setting data is omitted +(seq + (call authNode (authSrvId "check_permission") [] token) + (call blogNode (blogSrvId "add_post") [text token]) +) +``` + +This is what we want to have, but now let's see if we can poke holes in our security. + +### First Try: Person in the Middle (PITM/MITM) Attack + +In case check\_permission() returns false, a PITM attacker intercepts the outgoing network package, takes the function output, and attempts to replace false with true. This attempt fails, however, as in Aquamarine every peer's ID is derived from its public key and every response is signed with the corresponding private key: + +```rust +let resp_signature = sign(particle.signature, srvId, fnName, argsHash, responseHash) +``` + +Only the private key holder can verifiably sign the output of a function call. Hence, an attacker's attempt to change a function output, or to replay the output of a function call from another particle, leads to particle rejection on blogNode. + +### Second Try: Using The Wrong Service + +Consider the following script, where we set the token to true so that add\_post may assume that permission was actually given.
+ +``` +(seq + (call %init_peer_id% ("" "get_true") [] token) + (call blogNode (blogSrvId "add_post") [text token]) +) +``` + +How could we overcome this potential breach? On the blog service host, blogNode, the entire AIR script execution flow is verified. That is, the Aquamarine interpreter visits each instruction and checks whether the particle's data has the result of the execution of this instruction and, if it does, checks that it was done by the expected peer, service, function and with the expected arguments. This is verified by the argsHash signed within _resp\_signature_. So when the token is set to a value inside the Aquamarine interpreter, we know the origin of this data: a triplet of peerId, serviceId, functionName. + +In our case, the data triplet is %init\_peer\_id%, "", "get\_true", but we expect authNode, authSrvId, "check\_permission" with some known constants for authNode, authSrvId, as we know where we deployed the service. The add\_post function checks this triplet along with the token argument and rejects the particle. Hence, we failed to trick the system by faking the argument's origin: only the Auth service is considered a valid source of truth for authorization tokens. + +Our attack got thwarted again, but we have a few more tricks up our sleeves. + +### Third Try: Using The Wrong Piece Of Data + +Let's make a more sophisticated AuthStatus service that provides more data associated with the current peer id: + +``` +struct Status: + is_admin: bool + is_misbehaving: bool + +service AuthStatus: + def get_status(): Status +``` + +If this peer misbehaves, we set a special flag as follows: + +``` +;; Script is slightly simplified for better readability +;; Setting data is omitted +(seq + (call authNode (authSrvId "get_status") [] status) + (call blogNode (blogSrvId "add_post") [text status.$.is_admin]) +) +``` + +So we pass an _is\_admin_ flag to the blogNode, as we now have a permissioned blog, and all is well. Maybe.
+ +The problem is that we could just as easily pass the _is\_misbehaving_ flag to fake admin permissions and add a post. Consider other possible scenarios, where, for example, you could have a role in the status as well as a nickname, and you need to distinguish the two, even though both are strings. + +Recall that the origin of a result is stated with three values _peerId_, _serviceId_, _functionName_, while the origin of an argument is extended with one more attribute: the data getter. This forms a structure of four fields – the **tetraplet**: + +``` +struct SecurityTetraplet: + peer_id: string + service_id: string + fn_name: string + getter: string +``` + +The Aquamarine interpreter provides this tetraplet along with each argument during the function call, and the service checks them if deemed necessary. In fact, tetraplets are present for every argument as a vector of vectors of tetraplets: + +``` +pub tetraplets: Vec<Vec<SecurityTetraplet>> +``` + +which is possible due to the use of accumulators in AIR, produced with the fold instruction. Usually you don't need to care about them, and only the first, i.e. origin, tetraplet is set. + +## Limitations Of The Authentication Approach + +This strategy posits that only arguments should affect function behavior, decoupling the service from the AIR script and its input data. That is, the (public) service API stays safe by relying on exogenous permission checks, since the service itself has no access to the AIR script or its input data. + +### Only Arguments Affect The Function Execution + +This API cannot be used safely: + +``` +service WrongAuth: + def get_status_or_fail() // does not return if not authorized +``` + +as the _WrongAuth_ service cannot provide the expected checks: + +``` +(seq + (call authNode (authSrv "get_status_or_fail") []) ;; no return + (call blogNode (blogSrv "add_post") [text]) ;; no data +) +``` + +In the above script, if _get\_status\_or\_fail_ fails, _add\_post_ never executes.
But nothing prevents a user from calling _add\_post_ directly, so this design cannot be considered secure. That's why there must be an output from a security service that is provided as an argument later on. + +### Only Direct Dependencies Are Taken Into Account + +Consider a modified WrongAuth, which takes the peer id as an argument: + +``` +service WrongAuth: + def get_status(peer_id) // Status of the given peer +``` + +In this case, the tetraplet easily verifies that the input argument was not tampered with. However, whose status is it? As the arguments of the _get\_status_ function are not part of a tetraplet, we can't check that the right peer\_id was provided to the function. So from a design perspective, it is preferable for _get\_status_ to take no arguments, so that its input cannot be altered. + +What if we want to make the system secure in terms of tracking the data origin by taking the arguments into account? In this case, the verifier function _add\_post_ needs to know not only the name of the provider but also its structure, i.e., what inputs it has and, even worse, what the constraints of these inputs are and how to verify them. In effect, we would need to express the model of the whole program, i.e., the auth service and the AIR script, on the verifier side. + +This makes decomposition a pain: why decouple services if we need them to know so much about each other? That's why function calls in Aquamarine depend on the direct inputs, and direct inputs only.
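To make the origin check concrete, here is a plain-Rust sketch of the verification a function like _add\_post_ could perform on the token's tetraplet. The struct simply mirrors the SecurityTetraplet shape described above, and values such as `authNode` and `authSrvId` are illustrative placeholders the deployer would know:

```rust
// Plain-Rust mirror of the tetraplet origin check described in this section.
// The expected peer/service ids are hypothetical constants, not real Fluence ids.

#[derive(Clone, Debug, PartialEq)]
struct SecurityTetraplet {
    peer_id: String,
    service_id: String,
    fn_name: String,
    getter: String,
}

// Accept the token only if it originated from the known auth service,
// the expected function, and the expected data getter.
fn is_trusted_admin_flag(origin: &SecurityTetraplet) -> bool {
    origin.peer_id == "authNode"            // placeholder peer id
        && origin.service_id == "authSrvId" // placeholder service id
        && origin.fn_name == "get_status"
        && origin.getter == ".$.is_admin"
}

fn main() {
    let good = SecurityTetraplet {
        peer_id: "authNode".to_string(),
        service_id: "authSrvId".to_string(),
        fn_name: "get_status".to_string(),
        getter: ".$.is_admin".to_string(),
    };
    // Same service, wrong getter: is_misbehaving must not grant admin rights.
    let wrong_getter = SecurityTetraplet {
        getter: ".$.is_misbehaving".to_string(),
        ..good.clone()
    };
    println!("good origin accepted: {}", is_trusted_admin_flag(&good));
    println!("wrong getter rejected: {}", !is_trusted_admin_flag(&wrong_getter));
}
```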
+ +**References** + +* [Tetraplet implementation in the Aquamarine interpreter](https://github.com/fluencelabs/aquamarine/blob/master/crates/polyplets/src/tetraplet.rs) +* [Example of checking tetraplets for authorization in Fluent Pad](https://github.com/fluencelabs/fluent-pad/blob/main/services/history-inmemory/src/service\_api.rs#L91) +* [Getting tetraplets with Rust SDK](https://github.com/fluencelabs/marine-rs-sdk/blob/7c8f65fb64e64ba7e068b124449e745ef28c742d/sdk/src/call\_parameters.rs#L35) diff --git a/knowledge_tools.md b/knowledge_tools.md new file mode 100644 index 0000000..dd1883e --- /dev/null +++ b/knowledge_tools.md @@ -0,0 +1,13 @@ +# Tools + +## Aqua Command Line Tool + +Please see the [Aqua CLI](https://doc.fluence.dev/aqua-book/aqua-cli) documentation. + +## Fluence JS + +[Fluence JS](https://github.com/fluencelabs/fluence-js) enables developers to build full-fledged applications for a variety of targets ranging from browsers to backend apps, and greatly expands on the `cli` capabilities with respect to creating a local client peer. + +## Marine Tools + +Marine offers multiple tools including the Marine CLI, REPL and SDK. Please see the [Marine section](knowledge\_aquamarine/marine/) for more detail. diff --git a/node.md b/node.md new file mode 100644 index 0000000..d10702f --- /dev/null +++ b/node.md @@ -0,0 +1,36 @@ +# Node + +The Fluence protocol is implemented in the Fluence [reference node](https://github.com/fluencelabs/fluence), which includes the + +* Peer-to-peer communication layer +* Marine interpreter +* Aqua VM +* Builtin services + +and more. + +Builtin services are available on every Fluence peer and can be programmatically accessed and composed using Aqua just like any other service. For a complete list of builtin services, see the `builtin.aqua` file in the [Aqua Lib](https://github.com/fluencelabs/aqua-lib) repo.
+ +To find out how to create your own builtin service, see the [Add Your Own Builtins](tutorials\_tutorials/add-your-own-builtin.md) tutorial. + +## Node distribution + +All infrastructure-related information is kept in the [fluencelabs/node-distro](https://github.com/fluencelabs/node-distro) repository on GitHub. + +The node is distributed as a docker container [fluencelabs/fluence](https://hub.docker.com/r/fluencelabs/fluence). Version information can be found on the [releases page](https://github.com/fluencelabs/node-distro/releases) on GitHub. + +It comes with IPFS, AquaDHT and TrustGraph bundled. + +### How to run a node + +Just a simple docker run: + +``` +docker run --rm -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 fluencelabs/fluence +``` + +Or take a look at the [docker-compose.yml](https://github.com/fluencelabs/node-distro/blob/main/docker-compose.yml) in the node-distro repository. It starts a node with a web dashboard for exploring deployed services. + +``` +docker compose up -d +``` diff --git a/p2p.md b/p2p.md new file mode 100644 index 0000000..f74b050 --- /dev/null +++ b/p2p.md @@ -0,0 +1,73 @@ +# Thinking In Aquamarine + +Permissionless peer-to-peer networks have a lot to offer to developers and solution architects, such as decentralization, improved control over data, flattened request-response data models and zero trust security at the application and service level. Of course, these capabilities and benefits don't just arise from putting [libp2p](https://libp2p.io) to work. Instead, a peer-to-peer overlay is required. The Fluence protocol provides such an overlay, enabling a powerful distributed data routing and management protocol that allows developers to implement modern and secure Web3 solutions. See Figure 1 for a stylized representation of decentralized application development by programming the composition of services distributed across a peer-to-peer network.
+ +Figure 1: Decentralized Applications Composed From Distributed Services On P2P Nodes + +![](https://i.imgur.com/XxC7NN3.png) + +## Aquamarine + +As a complement to the protocol, Fluence provides the open source Aquamarine stack aimed at enabling developers to build high-quality, high-performance decentralized applications. Specifically, Aquamarine is purpose-built to ease the design and programming demands commonly encountered in distributed, and especially peer-to-peer, development. The Aquamarine stack comprises Aqua and Marine. + +[Aqua](https://doc.fluence.dev/aqua-book/) is a new-generation programming language allowing developers to program peer-to-peer networks and compose distributed services hosted on peer-to-peer nodes into decentralized applications and backends. [Marine](https://github.com/fluencelabs/marine), on the other hand, provides the necessary Wasm runtime environment on peers to facilitate the execution of compiled Aqua code. + +A major contribution of Aquamarine is that network and application layer programming, i.e., [Layer 3 and Layer 7](https://en.wikipedia.org/wiki/OSI\_model), is accessible to developers as a seamless and ergonomic composition-from-services experience in Aqua, thereby greatly reducing, if not eliminating, the high barriers to entry when it comes to the design and development of distributed and decentralized applications. + +## **Improved Request-Response Model** + +In some network models, such as client-server, the request-response model generally entails a response returning to the requesting client. For example, a client application tasked to conduct a credit check of a customer and to inform them via SMS would typically call a credit check API, consume the response, and then call an SMS API to send the necessary SMS.
+ +Figure 2: Client Server Request Response Model + +![](<.gitbook/assets/image (9).png>) + +The Fluence peer-to-peer protocol, on the other hand, allows for a much more effective request-response processing pattern where responses are forward-chained to the next consuming service(s) without having to make the return trip to the client. See Figure 3. + +Figure 3: Fluence P2P Protocol Request Response Model + +![](<.gitbook/assets/image (11).png>) + +In a Fluence p2p implementation, our client application would call a credit check API deployed or proxied on some peer and then send the response directly to the SMS API service, possibly deployed on another peer -- similar to the flow depicted in Figure 1. + +Such a significantly flattened request-response model leads to much lower resource requirements for applications in terms of bandwidth and processing capacity, thereby enabling a vast class of "thin" clients ranging from browsers to IoT and edge devices, truly enabling decentralized machine-to-machine communication. + +## **Zero Trust Security** + +The [zero trust security model](https://en.wikipedia.org/wiki/Zero\_trust\_security\_model) assumes the worst, i.e., a breach, at all times and proposes a "never trust, always verify" approach. This approach is inherent in the Fluence peer-to-peer protocol and Aqua programming model, as every service request can be authenticated at the service API level. That is, every service exposes functions which may require authentication and authorization. Aquamarine implements SecurityTetraplets as verifiable origins of the function arguments to enable fine-grained authorization. + +## Service Granularity And Redundancy + +Services are not capable of accepting more than one request at any given time. Consider a service, FooBar, comprised of two functions, foo() and bar(), where foo is a longer-running function.
+ +``` +-- Stylized FooBar service with two functions +-- foo() and bar() +-- foo is long-running +-- if foo is called before bar, bar is blocked +service FooBar("service-id"): + bar() -> string + foo() -> string --< long running function + +func foobar(node:string, service_id:string, func_name:string) -> string: + res: *string + on node: + BlockedService service_id + if func_name == "foo": + res <- BlockedService.foo() + else: + res <- BlockedService.bar() + <- res! +``` + +As long as foo() is running, the entire FooBar service, including bar(), is blocked. This has implications with respect to both service granularity and redundancy, where service granularity captures the number of functions per service and redundancy refers to the number of service instances deployed to different peers. + +## Summary + +Programming distributed applications on the Fluence protocol with Aquamarine unlocks significant benefits from peer-to-peer networks while greatly easing the design and development processes. Nevertheless, a mental shift concerning the peer-to-peer solution design and development process is required. Specifically, the successful mindset accommodates + +* an application architecture based on the composition of distributed services across peer-to-peer networks by decoupling business logic from application workflow +* a services-first approach with respect to both the network and application layer, allowing a unified network and application programming model encapsulated by Aqua +* a multi-layer security approach enabling zero-trust models at the service level +* a flattened request-response model enabling data free from centralized control +* a services architecture with respect to granularity and redundancy influenced by service function runtime diff --git a/quick-start/1.-browser-to-browser-1.md b/quick-start/1.-browser-to-browser-1.md new file mode 100644 index 0000000..93a7234 --- /dev/null +++ b/quick-start/1.-browser-to-browser-1.md @@ -0,0 +1,62 @@ +# 1.
Browser-to-Browser + +The first example demonstrates how to communicate between two client peers, i.e. browsers, with local services. The project is based on a create-react-app template with slight modifications to integrate Fluence. The primary focus is the integration itself; React could be swapped for any framework of your choice. + +Make sure you are in the `examples/quickstart/1-browser-to-browser` directory to install the dependencies: + +``` +cd examples/quickstart/1-browser-to-browser +npm install +``` + +Run the app with `npm start` : + +``` +npm run compile-aqua +npm start +``` + +This opens a new tab in your browser at `http://localhost:3000`. The browser tab, representing the client peer, asks you to pick a relay node it, i.e., the browser client, can connect to and, of course, allows the peer to respond to the browser client. Select any one of the offered relays: + +![Relay Selection](<../.gitbook/assets/image (17).png>) + + + +The client peer is now connected to the relay and ready for business: + +![Connection confirmation to network](<../.gitbook/assets/image (18).png>) + +Let's follow the instructions: open another browser tab, i.e. client peer, using `http://localhost:3000`, select any one of the relays, copy the ensuing peer id and relay peer id to the first client peer, i.e. the first browser tab, and click the `say hello` button:\ + +![Peer-to-peer communication between two browser client peers](<../.gitbook/assets/image (20).png>) + +Congratulations, you just sent messages between two browsers over the Fluence peer-to-peer network, which is pretty cool! Even cooler, however, is how we got here using Aqua, Fluence's distributed network and application composition language. + +Navigate to the `aqua` directory and open the `getting-started.aqua` file in your IDE or terminal: + +![getting-started.aqua](<../.gitbook/assets/image (51).png>) + +And yes, fewer than ten lines (!)
are required for a client peer, like our browser, to connect to the network and start composing the local `HelloPeer` service to send messages. + +In broad strokes, the Aqua code breaks down as follows: + +* Import the Aqua [standard library](https://github.com/fluencelabs/aqua-lib) into our application (1) +* Create a service interface binding to the local service (see below) with the `HelloPeer` namespace and `hello` function (4-5) +* Create the composition function `sayHello` that executes the `hello` call on the provided `targetPeerId` via the provided `targetRelayPeerId` and returns the result (7-10). Recall the copy and paste job you did earlier in the browser tab for the peer and relay ids? Well, you just found the consumption place for these two parameters. + +Not only is Aqua rather succinct in allowing you to seamlessly program both network routes and distributed application workflows, but it also provides the ability to compile Aqua to TypeScript stubs that wrap the compiled Aqua, called AIR -- short for Aqua Intermediate Representation -- into ready-to-use code blocks. Navigate to the `src/_aqua` directory, open the `getting-started.ts` file, and poke around a bit. + +Note that the `src/App.tsx` file relies on the generated `getting-started.ts` file (line 7): + +![App.tsx](<../.gitbook/assets/image (43).png>) + +We wrote little more than a handful of lines of code in Aqua and ended up with a deployment-ready code block that includes both the network routing and the compute logic to facilitate browser-to-browser messaging over a peer-to-peer network. + +The local (browser) service `HelloPeer` is also implemented in the `App.tsx` file: + +![Local HelloPeer service implementation](<../.gitbook/assets/image (22).png>) + +To summarize, we ran an app that facilitates messaging between two browsers over a peer-to-peer network.
At the core of this capability is Aqua, which allowed us to program both the network topology and the application workflow in barely more than a handful of lines of code. Hint: you should be excited. For more information on Aqua, see the [Aqua Book](https://app.gitbook.com/@fluence/s/aqua-book/). + +In the next section, we develop a WebAssembly module and deploy it as a hosted service to the Fluence peer-to-peer network. diff --git a/quick-start/2.-hosted-services.md b/quick-start/2.-hosted-services.md new file mode 100644 index 0000000..b30e5fd --- /dev/null +++ b/quick-start/2.-hosted-services.md @@ -0,0 +1,205 @@ +# 2. Hosted Services + +In the previous example, we used a local, browser-native service to facilitate the string generation and communication with another browser. The real power of the Fluence solution, however, is that services can be hosted on one or more nodes, easily reused and composed into decentralized applications with Aqua. + +{% hint style="info" %} +In case you haven't set up your development environment, follow the [setup instructions](../tutorials\_tutorials/recipes\_setting\_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples): + +```bash +git clone https://github.com/fluencelabs/examples +``` +{% endhint %} + +### Creating A WebAssembly Module + +In this section, we develop a simple `HelloWorld` service and host it on a peer-to-peer node of the Fluence testnet. In your IDE or terminal, change to the `2-hosted-services` directory and open the `src/main.rs` file: + +![Rust code for HelloWorld hosted service module](<../.gitbook/assets/image (48).png>) + +Fluence hosted services are composed of WebAssembly modules implemented in Rust and compiled to [wasm32-wasi](https://doc.rust-lang.org/stable/nightly-rustc/rustc\_target/spec/wasm32\_wasi/index.html).
Let's have a look at our code: + +```rust +// quickstart/2-hosted-services/src/main.rs +use marine_rs_sdk::marine; +use marine_rs_sdk::module_manifest; + +module_manifest!(); + +pub fn main() {} + +#[marine] +pub struct HelloWorld { + pub msg: String, + pub reply: String, +} + +#[marine] +pub fn hello(from: String) -> HelloWorld { + HelloWorld { + msg: format!("Hello from: \n{}", from), + reply: format!("Hello back to you, \n{}", from), + } +} +``` + +At the core of our implementation is the `hello` function, which takes a string parameter and returns the `HelloWorld` struct consisting of the `msg` and `reply` fields, respectively. We can use the `build.sh` script in the `scripts` directory, `./scripts/build.sh`, to compile the code to the wasm32-wasi target from the VSCode terminal: + +![](<../.gitbook/assets/image (47).png>) + +In addition to some housekeeping, the `build.sh` script gives the compile instructions with [marine](https://crates.io/crates/marine), `marine build --release`, and copies the resulting Wasm module, `hello_world.wasm`, to the `artifacts` directory for easy access. + +### Testing And Exploring Wasm Code + +So far, so good.
Of course, we want to test our code, and we have a couple of test functions in our `main.rs` file: + +```rust +// quickstart/2-hosted-services/src/main.rs +use marine_rs_sdk::marine; +use marine_rs_sdk::module_manifest; + +// + +#[cfg(test)] +mod tests { + use marine_rs_sdk_test::marine_test; + + #[marine_test(config_path = "../configs/Config.toml", modules_dir = "../artifacts")] + fn non_empty_string(hello_world: marine_test_env::hello_world::ModuleInterface) { + let actual = hello_world.hello("SuperNode".to_string()); + assert_eq!(actual.msg, "Hello from: \nSuperNode".to_string()); + } + + #[marine_test(config_path = "../configs/Config.toml", modules_dir = "../artifacts")] + fn empty_string(hello_world: marine_test_env::hello_world::ModuleInterface) { + let actual = hello_world.hello("".to_string()); + assert_eq!(actual.msg, "Hello from: \n"); + } +} + +``` + +To run our tests, we can use the familiar [`cargo test`](https://doc.rust-lang.org/cargo/commands/cargo-test.html). However, we don't really care all that much about testing our native Rust functions; we want to test our WebAssembly functions. This is where the extra code in the test module comes into play. In short, we run `cargo test` against the exposed interfaces of the `hello_world.wasm` module, and in order to do that, we need the `marine_test` macro and must provide it with both the modules directory, i.e., the `artifacts` directory, and the location of the `Config.toml` file. Note that the `Config.toml` file specifies the module metadata and optional module linking data. Moreover, we need to call our Wasm functions from the module namespace, i.e. `hello_world.hello` instead of the standard `hello` -- see lines 13 and 19 above -- which we specify as an argument in the test function signature (lines 11 and 17, respectively).
+ +{% hint style="info" %} +In order to be able to use the macro, install the [`marine-rs-sdk-test`](https://crates.io/crates/marine-rs-sdk-test) crate as a dev dependency: + +`[dev-dependencies]` `marine-rs-sdk-test = "<version>"` +{% endhint %} + +From the IDE or terminal, we now run our tests with the `cargo +nightly test --release` command. Please note that if `nightly` is your default, you don't need it in your `cargo test` command. + +![](<../.gitbook/assets/image (46).png>) + +Well done -- our tests check out. Before we deploy our service to the network, we can interact with it locally using the [Marine REPL](https://crates.io/crates/mrepl). In your VSCode terminal, in the `2-hosted-services` directory, run: + +``` +mrepl configs/Config.toml +``` + +which puts us in the REPL: + +```bash +mrepl configs/Config.toml +Welcome to the Marine REPL (version 0.9.1) +Minimal supported versions + sdk: 0.6.0 + interface-types: 0.20.0 + +app service was created with service id = 8a2d946d-b474-468c-8c56-9e970ee64743 +elapsed time 53.593404ms + +1> i +Loaded modules interface: +data HelloWorld: + msg: string + reply: string + +hello_world: + fn hello(from: string) -> HelloWorld + +2> call hello_world hello ["Fluence"] +result: Object({"msg": String("Hello from: \nFluence"), "reply": String("Hello back to you, \nFluence")}) + elapsed time: 278.5µs + +3> +``` + +We can explore the available interfaces with the `i` command and see that the interfaces we marked with the `marine` macro in our Rust code above are indeed exposed and available for consumption. Using the `call` command, still in the REPL, we can access any available function in the module namespace, e.g., `call hello_world hello ["Fluence"]`. You can exit the REPL with the `ctrl-c` command. + +### Exporting WebAssembly Interfaces To Aqua + +In anticipation of future needs, note that `marine` allows us to export the Wasm interfaces ready for use in Aqua. 
In your VSCode terminal, navigate to the `2-hosted-services` directory and run: + +``` +marine aqua artifacts/hello_world.wasm +``` + +Which gives us the Aqua-ready interfaces: + +```haskell +data HelloWorld: + msg: string + reply: string + +service HelloWorld: + hello(from: string) -> HelloWorld +``` + +That can be piped directly into an aqua file, e.g., `marine aqua my_wasm.wasm >> my_aqua.aqua`. + +### Deploying A Wasm Module To The Network + +Looks like all is in order with our module and we are ready to deploy our `HelloWorld` service to the world by means of the Fluence peer-to-peer network. For this to happen, we need two things: the peer id of our target node(s) and a way to deploy the service. The latter can be accomplished with the `aqua` command line tool and with respect to the former, we can get a peer from one of the Fluence testnets with `aqua`. In your VSCode terminal: + +``` +aqua config default_peers +``` + +Which gets us a list of network peers: + +``` +/dns4/kras-00.fluence.dev/tcp/19990/wss/p2p/12D3KooWSD5PToNiLQwKDXsu8JSysCwUt8BVUJEqCHcDe7P5h45e +/dns4/kras-00.fluence.dev/tcp/19001/wss/p2p/12D3KooWR4cv1a8tv7pps4HH6wePNaK6gf1Hww5wcCMzeWxyNw51 +/dns4/kras-01.fluence.dev/tcp/19001/wss/p2p/12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA +/dns4/kras-02.fluence.dev/tcp/19001/wss/p2p/12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf +/dns4/kras-03.fluence.dev/tcp/19001/wss/p2p/12D3KooWJd3HaMJ1rpLY1kQvcjRPEvnDwcXrH8mJvk7ypcZXqXGE +/dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi +/dns4/kras-05.fluence.dev/tcp/19001/wss/p2p/12D3KooWCMr9mU894i8JXAFqpgoFtx6qnV1LFPSfVc3Y34N4h4LS +/dns4/kras-06.fluence.dev/tcp/19001/wss/p2p/12D3KooWDUszU2NeWyUVjCXhGEt1MoZrhvdmaQQwtZUriuGN1jTr +/dns4/kras-07.fluence.dev/tcp/19001/wss/p2p/12D3KooWEFFCZnar1cUJQ3rMWjvPQg6yMV2aXWs2DkJNSRbduBWn +/dns4/kras-08.fluence.dev/tcp/19001/wss/p2p/12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt 
+/dns4/kras-09.fluence.dev/tcp/19001/wss/p2p/12D3KooWD7CvsYcpF9HE9CCV9aY3SJ317tkXVykjtZnht2EbzDPm +``` + +Let's use the peer `12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi` as our deployment target and deploy our service from the VSCode terminal. In the `quickstart/2-hosted-services` directory run: + +```bash +aqua remote deploy \ + --addr /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ + --config-path configs/hello_world_deployment_cfg.json \ + --service hello-world +``` + +Which gives us a unique service id: + +``` +Your peerId: 12D3KooWAnbFkXk3UFm2MyuNGsSQ6uXHAtjizRC2xv9Q6avN3JBx +"Going to upload a module..." +2022.02.12 00:03:48 [INFO] created ipfs client to /ip4/164.90.164.229/tcp/5001 +2022.02.12 00:03:48 [INFO] connected to ipfs +2022.02.12 00:03:50 [INFO] file uploaded +"Now time to make a blueprint..." +"Blueprint id:" +"5efb45e9442ae681d35dcfd4ab40a9927d47b5e16d380d02f71536ba2a2ee427" +"And your service id is:" +"09d9a052-8ccd-4627-9b3a-b72fe6571c87" +``` + +Take note of the service id, `09d9a052-8ccd-4627-9b3a-b72fe6571c87` in this example (yours will differ), as we need it to use the service with Aqua. + +Congratulations, we just deployed our first reusable service to the Fluence network and we can admire our handiwork on the Fluence [Developer Hub](https://dash.fluence.dev): + +![HelloWorld service deployed to peer 12D3Koo...WaoHi](<../.gitbook/assets/image (36).png>) + +With our newly created service ready to roll, let's move on and put it to work. diff --git a/quick-start/3.-browser-to-service.md b/quick-start/3.-browser-to-service.md new file mode 100644 index 0000000..6c9f685 --- /dev/null +++ b/quick-start/3.-browser-to-service.md @@ -0,0 +1,45 @@ +# 3. Browser-to-Service + +In the first section, we explored browser-to-browser messaging using local, i.e. browser-native, services and the Fluence network for message transport. 
In the second section, we developed a `HelloWorld` Wasm module and deployed it as a hosted service on the testnet peer `12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi` with service id `1e740ce4-81f6-4dd4-9bed-8d86e9c2fa50`. We can now extend our browser-to-browser messaging application with our hosted service. + +Let's navigate to the `3-browser-to-service` directory in the VSCode terminal and install the dependencies: + +``` +npm install +``` + +And run the application with: + +``` +npm run compile-aqua +npm start +``` + +Which will open a new browser tab at `http://localhost:3000`. Following the instructions, we connect to any one of the displayed relay ids, open another browser tab also at `http://localhost:3000`, select a relay and copy and paste the client peer id and relay id into the corresponding fields in the first tab and press the `say hello` button. + +![Browser To Service Implementation](<../.gitbook/assets/image (38) (2) (2) (2) (1).png>) + +The result looks familiar, so what's different? Let's have a look at the Aqua file. Navigate to the `aqua/getting_started.aqua` file in your IDE or terminal: + +![getting-started.aqua](<../.gitbook/assets/image (50).png>) + +And let's work it from the top: + +* Import the Aqua standard library (1) +* Provide the hosted service peer id (3) and service id (4) +* Specify the `HelloWorld` struct interface binding (6-8) for the hosted service from the `marine aqua` export +* Specify the `HelloWorld` interface and function binding (11-12) for the hosted service from the `marine aqua` export +* Specify the `HelloPeer` interface and function binding (15-16) for the local service +* Create the Aqua workflow function `sayHello` (18-29) + +Before we dive into the `sayHello` function, let's look at why we still need a local service even though we deployed a hosted service. The reason for that lies in the need for the browser client to be able to consume the message sent from the other browser through the relay peer. 
With that out of the way, let's dig in: + +* The function signature (18) takes two arguments: `targetPeerId`, which is the client peer id of the other browser, and `targetRelayPeerId`, which is the relay id -- both parameters are the values you copied and pasted from the second browser tab into the first browser tab +* The first step is to call on the hosted service `HelloWorld` on the host peer `helloWorldPeerId`, which we specified in line 3 + * We bind the `HelloWorld` interface, on the peer `helloWorldPeerId`, i.e., `12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi`, to the service id of the hosted service `helloWorldServiceId`, i.e. `1e740ce4-81f6-4dd4-9bed-8d86e9c2fa50`, which takes the `%init_peer_id%` parameter, i.e., the peer id of the peer that initiated the request, and pushes the result into `comp` (20-22) + * We now want to send a result back to the target browser (peer) (25-26) using the local service via the `targetRelayPeerId` in the background as a `co` routine. + * Finally, we send the `comp` result to the initiating browser + +A little more involved than our first example but we are again getting a lot done with very little code. Of course, there could be more than one hosted service in play and we could implement, for example, hosted spell checking, text formatting and so much more without much extra effort to express additional workflow logic in our Aqua script. + +This brings us to the end of this quick start tutorial. We hope you are as excited as we are to put Aqua and the Fluence stack to work. To continue your Fluence journey, have a look at the remainder of this book, take a deep dive into Aqua with the [Aqua book](https://doc.fluence.dev/aqua-book/) or dig into Marine and Aqua examples in the [repo](https://github.com/fluencelabs/examples). 
diff --git a/quick-start/4.-service-composition-and-reuse-with-aqua.md b/quick-start/4.-service-composition-and-reuse-with-aqua.md new file mode 100644 index 0000000..366ba67 --- /dev/null +++ b/quick-start/4.-service-composition-and-reuse-with-aqua.md @@ -0,0 +1,249 @@ +# 4. Service Composition And Reuse With Aqua + +In the previous three sections, you got a taste of using Aqua with browsers and how to create and deploy a service. In this section, we discuss how to compose an application from multiple distributed services using Aqua. In Fluence, we don't use JSON-RPC or REST endpoints to address and execute the service, we use [Aqua](https://github.com/fluencelabs/aqua). + +Recall, Aqua is a purpose-built distributed systems and peer-to-peer programming language that resolves (Peer Id, Service Id) tuples to facilitate service execution on the host node without developers having to worry about transport or network routing. And with Aqua VM available on each Fluence peer-to-peer node, Aqua allows developers to ergonomically locate and execute distributed services. + +{% hint style="info" %} +In case you haven't set up your development environment, follow the [setup instructions](../tutorials\_tutorials/recipes\_setting\_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples): + +```bash +git clone https://github.com/fluencelabs/examples +``` +{% endhint %} + +### Composition With Aqua + +A service is composed of one or more WebAssembly (Wasm) modules that may be linked at runtime. Said dependencies are specified by a **blueprint**, which is the basis for creating a unique service id after the deployment and initiation of the blueprint on our chosen host for deployment. See Figure 1. + +![](<../.gitbook/assets/image (41).png>) + +When we deploy our service, as demonstrated in section two, the service is "out there" on the network and we need a way to locate and execute the service if we want to utilize the service as part of our application. 
+ +Luckily, the (Peer Id, Service Id) tuple we obtain from the service deployment process contains all the information Aqua needs to locate and execute the specified service instance. + +Let's create a Wasm module with a single function that adds one to an input in the `adder` directory: + +```rust +use marine_rs_sdk::marine; + +#[marine] +pub fn add_one(input: u64) -> u64 { + input + 1 +} +``` + +For our purposes, we deploy that module as a service to three hosts: Peer 1, Peer 2, and Peer 3. Use the instructions provided in section two to create the module and deploy the service to three peers of your choosing. See `4-composing-services-with-aqua/adder` for the code and `data/distributed_service.json` for the (Peer Id, Service Id) tuples already deployed to three network peers. + +Once we have the services deployed to their respective hosts, we can use Aqua to compose an admittedly simple application by composing the use of each service into a workflow where the (Peer Id, Service Id) tuples facilitate the routing to and execution of each service. Also, recall that in the Fluence peer-to-peer programming model the client need not, and for the most part should not, be involved in managing intermediate results. Instead, results are "forward chained" to the next service as specified in the Aqua workflow. 
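Stripped of the Marine and network plumbing, the sequential composition described above is ordinary function chaining. Here is a plain-Rust sketch of what the chained calls compute locally (an illustration only, not the deployed service):

```rust
// Local sketch of sequential service composition: each "service" call is
// just add_one, and the output of one call feeds the next, mirroring how
// Aqua forward-chains results across (Peer Id, Service Id) tuples.
fn add_one(input: u64) -> u64 {
    input + 1
}

fn add_one_three_times(value: u64) -> u64 {
    add_one(add_one(add_one(value)))
}

fn main() {
    // starting with 1 and applying the service three times yields 4
    println!("{}", add_one_three_times(1)); // prints 4
}
```

The same chaining happens across three peers in the Aqua workflow below, with the network hops handled by the Aqua runtime rather than by the client.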
+ +Using our `add_one` service and starting with an input parameter value of one, utilizing all three services, we expect a final result of four given **seq**uential service execution: + +![](<../.gitbook/assets/image (42).png>) + +The underlying Aqua script may look something like this (see the `aqua-scripts` directory): + +``` +-- aqua-scripts/adder.aqua + +-- service interface for Wasm module +service AddOne: + add_one: u64 -> u64 + +-- convenience struct for (Peer Id, Service Id) tuples +data NodeServiceTuple: + node_id: string + service_id: string + +func add_one_three_times(value: u64, ns_tuples: []NodeServiceTuple) -> u64: + on ns_tuples!0.node_id: + AddOne ns_tuples!0.service_id + res1 <- AddOne.add_one(value) + + on ns_tuples!1.node_id: + AddOne ns_tuples!1.service_id + res2 <- AddOne.add_one(res1) + + on ns_tuples!2.node_id: + AddOne ns_tuples!2.service_id + res3 <- AddOne.add_one(res2) + <- res3 +``` + +Let's give it a whirl! Using the already deployed services or, even better, your own deployed services, let's run our Aqua script in the `4-composing-services-with-aqua` directory. 
We use `aqua run` to execute the above Aqua script: + +``` +aqua run \ +-i aqua-scripts \ +-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ +-f 'add_one_three_times(5, arg)' \ +-d '{"arg":[{ + "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt", + "service_id": "7b2ab89f-0897-4537-b726-8120b405074d" + }, + { + "node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA", + "service_id": "e013f18a-200f-4249-8303-d42d10d3ce46" + }, + { + "node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", + "service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e" + }] + }' +``` + +Since we start with a value of 5 and increment it three times, we expect an 8, which we get: + +``` +Your peerId: 12D3KooWHgS2T8mWoAkxoEaLtPjHauai2mVPrNSLDKZVd71KoxS1 +8 +``` + +Of course, we can drastically change our application logic by changing the execution flow of our workflow composition. In the above example, we executed each of the three services once in sequence. Alternatively, we could also execute them in parallel or some combination of sequential and parallel execution arms. + +Reusing our deployed services with a different execution flow may look like the following: + +``` +-- service interface for Wasm module +service AddOne: + add_one: u64 -> u64 + +-- convenience struct for (Peer Id, Service Id) tuples +data NodeServiceTuple: + node_id: string + service_id: string + +-- our app as defined by the workflow expressed in Aqua +func add_one_par(value: u64, ns_tuples: []NodeServiceTuple) -> []u64: + res: *u64 + for ns <- ns_tuples par: + on ns.node_id: + AddOne ns.service_id + res <- AddOne.add_one(value) + Op.noop() + join res[2] --< wait for the third result in the stream variable + <- res --< return the final results [value + 1, value + 1, value + 1, ...] to the client +``` + +Unlike the sequential execution model, this example returns an array where each item is the incremented value, which is captured by the stream variable **res**. That is, for a starting value of five (5), we obtain \[6,6,6] assuming our NodeServiceTuple array provides three distinct (Peer Id, Service Id) tuples. + +Running the script with aqua: + +``` +aqua run \ +-i aqua-scripts \ +-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ +-f 'add_one_par(5, arg)' \ +-d '{"arg":[{ + "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt", + "service_id": "7b2ab89f-0897-4537-b726-8120b405074d" + }, + { + "node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA", + "service_id": "e013f18a-200f-4249-8303-d42d10d3ce46" + }, + { + "node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", + "service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e" + }] + }' +``` + +We get the expected result: + +``` +Your peerId: 12D3KooWB4eHpj2VfPDW9hJ5uMQiccV27uSJyHWiMUGN2hqkefV8 + waiting for an argument with idx '2' on stream with size '0' + waiting for an argument with idx '2' on stream with size '0' + waiting for an argument with idx '2' on stream with size '1' + waiting for an argument with idx '2' on stream with size '1' +[ + 6, + 6, + 6 +] +``` + +We can improve on our business logic and change our input arguments to make parallelization a little more useful. 
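The parallel arm above can be sketched locally with threads: every "peer" receives the same starting value and computes its increment independently, and the results are collected much like the stream variable `res` (a plain-Rust illustration, not the Fluence runtime):

```rust
use std::thread;

// Parallel fan-out sketch: each of n_peers workers gets the same starting
// value, so with three workers and value 5 we expect [6, 6, 6].
fn add_one_par(value: u64, n_peers: usize) -> Vec<u64> {
    let handles: Vec<_> = (0..n_peers)
        .map(|_| thread::spawn(move || value + 1))
        .collect();
    // collect the results, analogous to joining on the stream variable
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    println!("{:?}", add_one_par(5, 3)); // prints [6, 6, 6]
}
```

Note the contrast with the sequential sketch: here no worker sees another worker's output, which is why the parallel flow yields three independent increments rather than a chained result.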
Let's extend our data struct and update the workflow: + +``` +-- aqua-scripts/adder.aqua + +data ValueNodeService: + node_id: string + service_id: string + value: u64 --< add value + +func add_one_par_alt(payload: []ValueNodeService) -> []u64: + res: *u64 + for vns <- payload par: --< parallelized run + on vns.node_id: + AddOne vns.service_id + res <- AddOne.add_one(vns.value) + Op.noop() + join res[2] + <- res +``` + +And we can run the `aqua run` command: + +``` +aqua run \ +-i aqua-scripts \ +-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ +-f 'add_one_par_alt(arg)' \ +-d '{"arg":[{ + "value": 5, + "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt", + "service_id": "7b2ab89f-0897-4537-b726-8120b405074d" + }, + { + "value": 10, + "node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA", + "service_id": "e013f18a-200f-4249-8303-d42d10d3ce46" + }, + { + "value": 15, + "node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", + "service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e" + }] + }' +``` + +Given our input values \[5, 10, 15], we get the expected output array of \[6, 11, 16]: + +``` +Your peerId: 12D3KooWNHJkYtevGk5ccZFyHyfinTJYNDJZ4C9KN9cJGEqaWVe9 + waiting for an argument with idx '2' on stream with size '0' + waiting for an argument with idx '2' on stream with size '0' + waiting for an argument with idx '2' on stream with size '1' + waiting for an argument with idx '2' on stream with size '1' +[ + 6, + 11, + 16 +] +``` + +Alternatively, we can run our Aqua scripts with a Typescript client. 
In the `client-peer` directory: + +``` +npm i +npm start +``` + +Which of course gives us the expected results: + +``` +created a Fluence client 12D3KooWGve35kvMQ8USbmtRoMCzxaBPXSbqsZxfo6T8gBAV6bzy with relay 12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA +add_one to 5 equals 6 +add_one sequentially equals 8 +add_one parallel equals [ 6, 6, 6 ] +add_one parallel alt equals [ 11, 6, 16 ] --< order may differ for you +``` + +### Summary + +This section illustrates how Aqua allows developers to locate and execute distributed services by merely providing a (Peer Id, Service Id) tuple and the associated data. From an Aqua user perspective, there are no JSON-RPC or REST endpoints, just topology tuples that are resolved on peers of the network. Moreover, we saw how the Fluence peer-to-peer workflow model facilitates a different request-response model than commonly encountered in traditional client-server applications. That is, instead of returning each service result to the client, Aqua allows us to forward the (intermediate) result to the next service, peer-to-peer style. + +Furthermore, we explored how different Aqua execution flows, e.g. **seq**uential vs. **par**allel, and data models allow developers to compose drastically different workflows and applications reusing already deployed services. For more information on Aqua, please see the [Aqua book](https://doc.fluence.dev/aqua-book/) and for more information on Fluence development, see the [developer docs](https://doc.fluence.dev/docs/). diff --git a/quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md b/quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md new file mode 100644 index 0000000..c6ba92d --- /dev/null +++ b/quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md @@ -0,0 +1,361 @@ +# 5. Decentralized Oracles With Fluence And Aqua + +### Overview + +An oracle is a device that provides real-world, off-chain data to deterministic on-chain consumers such as a smart contract. 
A decentralized oracle draws from multiple, purportedly (roughly) equal input sources to minimize or even eliminate single source pitfalls such as [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle\_attack) (MITM) or provider manipulation. For example, a decentralized price oracle for, say, ETH/USD, could poll several DEXs for ETH/USD prices. Since smart contracts, especially those deployed on EVMs, can't directly call off-chain resources, oracles play a critical "middleware" role in the decentralized, trustless ecosystem. See Figure 1. + +![](<../.gitbook/assets/image (44).png>) + +Unlike single source oracles, multi-source oracles require some consensus mechanism to convert multiple input sources over the same target parameter into reliable point or range data suitable for third party, e.g., smart contract, consumption. Such "consensus over inputs" may take the form of simple [summary statistics](https://en.wikipedia.org/wiki/Summary\_statistics), e.g., mean, or one of many [other methods](https://en.wikipedia.org/wiki/Consensus\_\(computer\_science\)). + +Given the importance of oracles to the Web3 ecosystem, it's not surprising to see a variety of third party solutions supporting various blockchain protocols. Fluence does not provide an oracle solution _per se_ but provides a peer-to-peer platform, tools and components for developers to quickly and easily program and compose reusable distributed data acquisition, processing and delivery services into decentralized oracle applications. + +For the remainder of this section, we work through the process of developing a decentralized, multi-source timestamp oracle comprised of data acquisition, processing and delivery. + +### Creating A Decentralized Timestamp Oracle + +Time, often in the form of timestamps, plays a critical role in a large number of Web2 and Web3 applications including off-chain voting applications and on-chain clocks. 
Our goal is to provide a consensus timestamp sourced from multiple input sources and implement an acceptable input aggregation and processing service to arrive at either a timestamp point or range value(s). + +{% hint style="info" %} +In case you haven't set up your development environment, follow the [setup instructions](../tutorials\_tutorials/recipes\_setting\_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples): + +```bash +git clone https://github.com/fluencelabs/examples +``` +{% endhint %} + +#### Timestamp Acquisition + +Each Fluence peer, i.e. node in the Fluence peer-to-peer network, has the ability to provide a timestamp from a [builtin service](https://github.com/fluencelabs/aqua-lib/blob/b90f2dddc335c155995a74d8d97de8dbe6a029d2/builtin.aqua#L127). In Aqua, we can call a [timestamp function](https://github.com/fluencelabs/fluence/blob/527e26e08f3905e53208b575792712eeaee5deca/particle-closures/src/host\_closures.rs#L124) with the desired granularity, i.e., seconds or milliseconds for further processing: + +```python + -- aqua timestamp sourcing + on peer: + ts_ms_result <- peer.timestamp_ms() + -- or + ts_sec_result <- peer.timestamp_sec() + -- ... +``` + +In order to decentralize our timestamp oracle, we want to poll multiple peers in the Fluence network: + +```python + -- multi-peer timestamp sourcing + -- ... + results: *u64 + for peer <- many_peers_list par: + on peer: + results <- peer.timestamp_ms() + -- ... +``` + +In the above example, we have a list of peers and retrieve a timestamp value from each one. Note that we are polling nodes for timestamps in [parallel](https://doc.fluence.dev/aqua-book/language/flow/parallel) in order to optimize toward uniformity and to collect responses in the stream variable `results`. See Figure 2. + +![](<../.gitbook/assets/image (45).png>) + +The last thing to pin down concerning our timestamp acquisition is which peers to query. 
One possibility is to specify the peer ids of a set of desired peers to query. Alternatively, we can tap into the [Kademlia neighborhood](https://en.wikipedia.org/wiki/Kademlia) of a peer, which is a set of peers that are closest to our peer based on the XOR distance of the peer ids. Luckily, there is a [builtin service](https://github.com/fluencelabs/aqua-lib/blob/b90f2dddc335c155995a74d8d97de8dbe6a029d2/builtin.aqua#L140) we can call from Aqua that returns up to 20 neighboring peers: + +```python + -- timestamps from Kademlia neighborhood + results: *u64 + on node: + k <- Op.string_to_b58(node) + nodes <- Kademlia.neighborhood(k, nil, nil) + for node <- nodes par: + on node: + try: + results <- node.timestamp_ms() + -- ... +``` + +#### Timestamp Processing + +Once we have our multiple timestamp values, we need to process them into a point or range value(s) to be useful. Whatever our processing/consensus algorithm is, we can implement it in Marine as one or more reusable, distributed services. + +For example, we can rely on [summary statistics](https://en.wikipedia.org/wiki/Summary\_statistics) and implement basic averaging to arrive at a point estimate: + +```rust + // ... + + #[marine] + pub fn ts_avg(timestamps: Vec<u64>) -> f64 { + timestamps.iter().sum::<u64>() as f64 / timestamps.len() as f64 +} + // ... +``` + +Using the average to arrive at a point-estimate is simply a stake in the ground to illustrate what's possible. Actual processing algorithms may vary and, depending on a developer's target audience, different algorithms may be used for different delivery targets. And Aqua makes it easy to customize workflows while emphasizing reuse. + +#### Putting It All Together + +Let's put it all together by sourcing timestamps from the Kademlia neighborhood and processing the timestamps into a consensus value. 
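To make the summary-statistics approach discussed above concrete, here is a small plain-Rust sketch of mean and median aggregation over sampled timestamps (illustrative only; `ts_median` is an addition of ours, not part of the deployed service):

```rust
// Aggregate sampled timestamps into a point estimate; the mean mirrors the
// ts_avg service above, the median is a robust alternative against outliers.
fn ts_avg(timestamps: &[u64]) -> f64 {
    timestamps.iter().sum::<u64>() as f64 / timestamps.len() as f64
}

fn ts_median(timestamps: &[u64]) -> u64 {
    let mut sorted = timestamps.to_vec();
    sorted.sort_unstable();
    sorted[sorted.len() / 2] // middle element (upper middle for even n)
}

fn main() {
    let ts = [1637182263u64, 1637182264, 1637182265];
    println!("avg:    {}", ts_avg(&ts));
    println!("median: {}", ts_median(&ts));
}
```

A single far-off sample, e.g., a peer with a badly skewed clock, drags the mean but leaves the median untouched, which is one reason to prefer rank-based statistics for oracle aggregation.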
Instead of one of the summary statistics, we employ a simple consensus algorithm that randomly selects one of the provided timestamps and then calculates a consensus score from the remaining n - 1 timestamps: + +```rust +// src/main.rs +use marine_rs_sdk::marine; +use nanorand::{Rng, WyRand}; // RNG, assuming the nanorand crate +// +// simple consensus from timestamps +// params: +// timestamps, u64, [0, u64_max] +// tolerance, u32, [0, u32_max] +// threshold, f64, [0.0, 1.0] +// 1. Remove a randomly selected timestamp from the array of timestamps, ts +// 2. Count the number of timestamps left in the array that are within +/- tolerance (where tolerance may be zero) +// 3. Compare the supporting number of timestamps divided by the number of remaining timestamps to the threshold. If >=, consensus for the selected timestamp is true, else false +// +#[marine] +fn ts_frequency(mut timestamps: Vec<u64>, tolerance: u32, threshold: f64, err_value: u64) -> Consensus { + timestamps.retain(|&ts| ts != err_value); + if timestamps.len() == 0 { + return Consensus { + err_str: "Array must have at least one element".to_string(), + ..<_>::default() + }; + } + + if timestamps.len() == 1 { + return Consensus { + n: 1, + consensus_ts: timestamps[0], + consensus: true, + support: 1, + ..<_>::default() + }; + } + + if threshold < 0f64 || threshold > 1f64 { + return Consensus { + err_str: "Threshold needs to be between [0.0,1.0]".to_string(), + ..<_>::default() + }; + } + + let rnd_seed: u64 = timestamps.iter().sum(); + let mut rng = WyRand::new_seed(rnd_seed); + let rnd_idx = rng.generate_range(0..timestamps.len()); + let consensus_ts = timestamps.swap_remove(rnd_idx); + let mut support: u32 = 0; + for ts in timestamps.iter() { + if ts <= &(consensus_ts + tolerance as u64) && ts >= &(consensus_ts - tolerance as u64) { + support += 1; + } + } + + let mut consensus = false; + if (support as f64 / timestamps.len() as f64) >= threshold { + consensus = true; + } + + Consensus { + n: timestamps.len() as u32, + consensus_ts, + consensus, + support, + err_str: "".to_string(), + } +} +``` + +We 
compile our consensus module with `./scripts/build.sh`, which allows us to run the unit tests using the Wasm module with `cargo +nightly test`: + +```bash +# src/main.rs +running 10 tests +test tests::ts_validation_good_consensus_false ... ok +test tests::test_err_val ... ok +test tests::test_mean_fail ... ok +test tests::ts_validation_good_consensus ... ok +test tests::ts_validation_bad_empty ... ok +test tests::ts_validation_good_consensus_true ... ok +test tests::ts_validation_good_no_support ... ok +test tests::test_mean_good ... ok +test tests::ts_validation_good_no_consensus ... ok +test tests::ts_validation_good_one ... ok + +test result: ok. 10 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 18.75s +``` + +We can now interact with our module with the Marine REPL `mrepl configs/Config.toml`: + +```python +Welcome to the Marine REPL (version 0.9.1) +Minimal supported versions + sdk: 0.6.0 + interface-types: 0.20.0 + +app service was created with service id = 520a092b-85ef-43c1-9c12-444274ba2cb7 +elapsed time 62.893047ms + +1> i +Loaded modules interface: +data Consensus: + n: u32 + consensus_ts: u64 + consensus: bool + support: u32 + err_str: string +data Oracle: + n: u32 + avg: f64 + err_str: string + +ts_oracle: + fn ts_avg(timestamps: []u64, min_points: u32) -> Oracle + fn ts_frequency(timestamps: []u64, tolerance: u32, threshold: f64, err_value: u64) -> Consensus + +2> call ts_oracle ts_frequency [[1637182263,1637182264,1637182265,163718226,1637182266], 0, 0.66, 0] +result: Object({"consensus": Bool(false), "consensus_ts": Number(1637182264), "err_str": String(""), "n": Number(4), "support": Number(0)}) + elapsed time: 167.078µs + +3> call ts_oracle ts_frequency [[1637182263,1637182264,1637182265,163718226,1637182266], 5, 0.66, 0] +result: Object({"consensus": Bool(true), "consensus_ts": Number(1637182264), "err_str": String(""), "n": Number(4), "support": Number(3)}) + elapsed time: 63.291µs +``` + +In our first call at prompt `2>`, we set a tolerance of 0 and, given our array of 
timestamps, have no support for the chosen timestamp, whereas in the next call, `3>`, we increase the tolerance parameter and obtain a consensus result. + +All looks satisfactory and we are ready to deploy our module with `./scripts/deploy.sh`, which write-appends the deployment response data, including the service id, to a local file named `deployed_service.data`: + +```bash +client seed: 7UNmJPMWdLmrwAtGrpJXNrrcK7tEZHCjvKbGdSzEizEr +client peerId: 12D3KooWBeEsUnMV9MZ6QGMfaxcTvw8mGFEMGDe7rhKN8RQv1Gs8 +relay peerId: 12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi +service id: 61a86f67-ffc2-4dea-8746-fd4f04d9c75b +service created successfully +``` + +With the service in place, let's have a look at our Aqua script. Recall, we want to poll the Kademlia neighborhood for timestamps and then call the `ts_oracle` method of our service with the array of timestamps and tolerance parameters as well as the (peer id, service id) parameters of our deployed service: + +```python +-- aqua/ts_oracle.aqua +-- + +func ts_oracle_with_consensus(tolerance: u32, threshold: f64, err_value: u64, node: string, oracle_service_id: string) -> Consensus, []string: + rtt = 1000 + res: *u64 -- 4 + msg = "timeout" + dead_peers: *string + on node: + k <- Op.string_to_b58(node) + nodes <- Kademlia.neighborhood(k, nil, nil) -- 1 + for n <- nodes par: -- 3 + status: *string + on n: -- 7 + res <- Peer.timestamp_ms() -- 2 + status <<- "success" -- 9 + par status <- Peer.timeout(rtt, msg) -- 8 + if status! != "success": + res <<- err_value -- 10 + dead_peers <<- n -- 11 + + MyOp.identity(res!19) -- 5 + TSOracle oracle_service_id + consensus <- TSOracle.ts_frequency(res, tolerance, threshold, err_value) -- 6 + <- consensus, dead_peers -- 12 +``` + +That script is probably a little more involved than what you've seen so far. 
So let's work through the script: In order to get our set of timestamps, we determine the Kademlia neighbors (1) and then proceed to request a timestamp from each of those peers (2) in parallel (3). In an ideal world, each peer responds with a timestamp and the stream variable `res` (4) fills up with the twenty values from the twenty neighbors, which we then fold (5) and push to our consensus service (6). Alas, life in a distributed system isn't quite that simple, since there are no guarantees that a peer is actually available to connect or provide a service response. Since we may never actually connect to a peer (7), we can't expect an error response, meaning that we get a silent failure at (2) and no write to the stream `res`. Subsequently, this leads to the failure of the fold operation (5), since fewer than the expected twenty items are in the stream and the operation (5) ends up timing out waiting for a never-to-arrive timestamp. + +In order to deal with this issue, we introduce a sleep operation (8) with the builtin [Peer.timeout](https://github.com/fluencelabs/aqua-lib/blob/1193236fe733e75ed0954ed26e1234ab7a6e7c53/builtin.aqua#L135) and run that in parallel to the attempted connection for peer `n` (3), essentially setting up a race to write to the stream: if the peer (`on n`, 7) behaves, we write the timestamp to `res` (2) and make a note of that successful operation (9); else, we write a dummy value, i.e., `err_value`, into the stream (10) and make a note of the delinquent peer (11). Recall, we filter out the dummy `err_value` at the service level. + +Once we get our consensus result (6), we return it as well as the array of unavailable peers (12). And that's all there is. 
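The two ideas above, racing each peer request against a timeout and then taking a frequency-based consensus over the collected values, can be sketched in plain Rust. This is a local illustration, not the deployed service code: the channel-based timeout stands in for `Peer.timeout`, the simulated delays stand in for remote peers, and the helper names are made up for this sketch.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Race each (simulated) peer request against a timeout, mirroring steps (2) and (8):
// a slow or dead peer yields `err_value` instead of stalling the stream.
fn collect_timestamps(peer_delays_ms: Vec<u64>, rtt: Duration, err_value: u64) -> Vec<u64> {
    peer_delays_ms
        .into_iter()
        .map(|delay| {
            let (tx, rx) = mpsc::channel();
            thread::spawn(move || {
                // Stand-in for `Peer.timestamp_ms()` on a remote peer.
                thread::sleep(Duration::from_millis(delay));
                let _ = tx.send(1_637_182_264u64);
            });
            rx.recv_timeout(rtt).unwrap_or(err_value)
        })
        .collect()
}

// Frequency-based consensus in the spirit of `ts_frequency`: drop the dummy
// `err_value` entries, find the candidate with the most timestamps within
// `tolerance` ms, and compare its support against the `threshold` fraction.
fn ts_frequency(mut timestamps: Vec<u64>, tolerance: u64, threshold: f64, err_value: u64) -> (bool, u64, u32) {
    timestamps.retain(|&t| t != err_value); // filter dummy values up front
    if timestamps.is_empty() {
        return (false, 0, 0);
    }
    let n = timestamps.len() as f64;
    let (best_ts, support) = timestamps
        .iter()
        .map(|&candidate| {
            let support = timestamps
                .iter()
                .filter(|&&t| t.abs_diff(candidate) <= tolerance)
                .count();
            (candidate, support)
        })
        .max_by_key(|&(_, support)| support)
        .unwrap();
    (support as f64 / n >= threshold, best_ts, support as u32)
}
```

With tolerance 5 and threshold 0.66, three close timestamps out of four valid ones (support 3/4 = 0.75) reach consensus, matching the behavior observed in the REPL session above.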
+ +In order to execute our workflow, we can use Aqua's `aqua run` CLI without having to manually compile the script: + +```bash +aqua run \ + -a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ + -i aqua/ts_oracle.aqua \ + -f 'ts_oracle_with_consensus(10, 0.66, 0, "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", "61a86f67-ffc2-4dea-8746-fd4f04d9c75b")' +``` + +If you are new to `aqua run`, the CLI functionality is provided by the [`aqua` package](https://www.npmjs.com/package/@fluencelabs/aqua): + +* `aqua run --help` for all your immediate needs +* the `-i` flag denotes the location of our reference aqua file +* the `-a` flag denotes the multi-address of our connection peer/relay +* the `-f` flag handles the meat of the call: + * specify the aqua function name + * provide the parameter values matching the function signature + +Upon execution of our `aqua run` client, we get the following result, which may be drastically different for you: + +```bash +Your peerId: 12D3KooWEAhnNDjnh7C9Jba4Yn3EPcK6FJYRMZmaEQuqDnkb9UQf +[ + { + "consensus": true, + "consensus_ts": 1637883531844, + "err_str": "", + "n": 15, + "support": 14 + }, + [ + "12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA", + "12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv", + "12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU", + "12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r" + ] +] +``` + +Recall that the maximum number of peers returned by a Kademlia neighborhood query is 20 -- the default value set by the Fluence team. 
As discussed above, not all nodes may be available at any given time and at the _time of this writing_, the following four nodes were indeed not providing a timestamp response: + +```bash +[ + "12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA", + "12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv", + "12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU", + "12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r" + ] +``` + +That leaves us with a smaller timestamp pool to run through our consensus algorithm than anticipated. Please note that it is up to the consensus algorithm design(er) to set the minimum acceptable number of inputs deemed necessary to produce a sensible and acceptable result. In our case, we run fast and loose, as evident in our service implementation discussed above, and go with what we get as long as we get at least one timestamp. + +With a tolerance of ten (10) milliseconds and a consensus threshold of 2/3 (0.66), we indeed attain a consensus for the _1637883531844_ value with support from 14 out of 15 timestamps: + +```bash + { + "consensus": true, + "consensus_ts": 1637883531844, + "err_str": "", + "n": 15, + "support": 14 + }, +``` + +We can make adjustments to the _tolerance_ parameter: + +```bash +aqua run \ + -a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ + -i aqua/ts_oracle.aqua \ + -f 'ts_oracle_with_consensus(0, 0.66, 0, "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", "61a86f67-ffc2-4dea-8746-fd4f04d9c75b")' +``` + +Which does _not_ result in a consensus timestamp given the same threshold value: + +```bash +Your peerId: 12D3KooWP7vAR462JgoagUzGA8s9YccQZ7wsuGigFof7sajiGThr +[ + { + "consensus": false, + "consensus_ts": 1637884102677, + "err_str": "", + "n": 15, + "support": 0 + }, + [ + "12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA", + "12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv", + "12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU", + 
"12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r" + ] +] +``` + +We encourage you to experiment and tweak the parameters both for the consensus algorithm and the timeout settings. Obviously, longer routes make for more timestamp variance even if each timestamp returned is "true." + +### Summary + +Fluence and Aqua make it easy to create and implement decentralized oracle and consensus algorithms using Fluence's off-chain peer-to-peer network and tool set. + +To further your understanding of creating decentralized off-chain (compute) oracles with Fluence and Aqua, experiment with both the consensus methodology and, of course, the oracle sources. Instead of timestamps, try your hand at crypto price/pairs and associated liquidity data, election exit polls or sports scores. Enjoy! diff --git a/quick-start/README.md b/quick-start/README.md new file mode 100644 index 0000000..78c7f50 --- /dev/null +++ b/quick-start/README.md @@ -0,0 +1,19 @@ +# Quick Start + +Welcome to our quick-start tutorials, which guide you through the necessary steps to + +1. Create a browser-to-browser messaging web application +2. Create and deploy a hosted service +3. Enhance a browser-to-browser application with a network-hosted service +4. Explore service composition and reuse with Aqua +5. Work through a decentralized price oracle example with Fluence and Aqua + +## Preparing Your Environment + +In case you haven't set up your development environment, follow the [setup instructions](../tutorials\_tutorials/recipes\_setting\_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples): + +```bash +git clone https://github.com/fluencelabs/examples +``` + +If you encounter any problems or have suggestions, please open an issue or submit a PR. You can also reach out in [Discord](https://fluence.chat) or [Telegram](https://t.me/fluence\_project). 
diff --git a/research-papers-and-references.md b/research-papers-and-references.md new file mode 100644 index 0000000..88b2a49 --- /dev/null +++ b/research-papers-and-references.md @@ -0,0 +1,4 @@ +# Research, Papers And References + +* [Fluence Manifesto](https://fluence.network/manifesto.html) +* [Fluence Protocol](https://github.com/fluencelabs/rfcs/blob/main/0-overview.md) diff --git a/tutorials_tutorials/README.md b/tutorials_tutorials/README.md new file mode 100644 index 0000000..84ce15b --- /dev/null +++ b/tutorials_tutorials/README.md @@ -0,0 +1,2 @@ +# Tutorials + diff --git a/tutorials_tutorials/add-your-own-builtin.md b/tutorials_tutorials/add-your-own-builtin.md new file mode 100644 index 0000000..5ed954d --- /dev/null +++ b/tutorials_tutorials/add-your-own-builtin.md @@ -0,0 +1,136 @@ +# Add Your Own Builtins + +Some service functionalities are in such ubiquitous demand that they are suitable candidates to be deployed directly to a peer node. The [Aqua distributed hash table](https://github.com/fluencelabs/aqua-dht) (DHT) is an example of a builtin service. The remainder of this tutorial guides you through the steps necessary to create and deploy a builtin service. + +In order to have a service available out-of-the-box with the necessary startup and scheduling scripts, we can take advantage of the Fluence [deployer feature](https://github.com/fluencelabs/fluence/tree/master/crates/builtins-deployer) for node-native services. This feature handles the complete deployment process including + +* module uploads, +* service deployment and +* script initialization and scheduling + +Note that the deployment process is a fully automated workflow requiring you to merely submit your service assets, i.e., Wasm modules and configuration scripts, in the appropriate format as a PR to the [Fluence](https://github.com/fluencelabs/fluence) repository. + +At this point you should have a solid grasp of creating service modules and their associated configuration files. 
+ +Our first step is to fork the [Fluence](https://github.com/fluencelabs/fluence) repo by clicking on the Fork button, upper right of the repo webpage, and follow the instructions to create a local copy. In your local repo copy, checkout a new branch with a new, unique branch name: + +``` +cd fluence +git checkout -b MyBranchName +``` + +In our new branch, we create a directory with the service name in the _deploy/builtins_ directory: + +``` +cd deploy/builtins +mkdir my-new-super-service +cd my-new-super-service +``` + +Replace _my-new-super-service_ with your service name. + +Now we can build and populate the required directory structure with your service assets. You should put your service files in the corresponding _my-new-super-service_ directory. + +## Requirements + +In order to deploy a builtin service, we need + +* the Wasm file for each module required for the service +* the blueprint file for the service +* the optional start and scheduling scripts + +### Blueprint + +Blueprints capture the service name and dependencies: + +```javascript +// example_blueprint.json +{ + "name": "my-new-super-service", + "dependencies": [ + "name:my_module_1", + "name:my_module_2", + "hash:Hash(my_module_3.wasm)" + ] +} +``` + +Where + +* `name` specifies the service's name and +* `dependencies` lists the names of the Wasm modules or the Blake3 hashes of the Wasm modules + +In the above example, _my\_module\_i_ refers to the i-th module created when you compiled your service code. + +{% hint style="info" %} +The easiest way to get the Blake3 hash of our Wasm modules is to install the [b3sum](https://crates.io/crates/blake3) utility: + +``` +cargo install b3sum +b3sum my_module_3.wasm +``` +{% endhint %} + +If you decide to use the hash approach, please use the hash for the config file names as well (see below). 
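The dependency entries follow a simple `prefix:value` convention. A minimal plain-Rust sketch (the helper name is made up for illustration) of validating such entries:

```rust
// Hypothetical helper: split a blueprint dependency entry into its kind
// ("name" or "hash") and its value, rejecting anything else.
fn parse_dependency(entry: &str) -> Option<(&str, &str)> {
    let (kind, value) = entry.split_once(':')?;
    if kind == "name" || kind == "hash" {
        Some((kind, value))
    } else {
        None
    }
}
```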
+ +### **Start Script** + +Start scripts, which are optional, execute once after service deployment or node restarts. They are submitted as _AIR_ files and may be accompanied by a _json_ file containing the necessary parameters. + +``` +;; on_start.air +(seq + (call relay ("some_service_alias" "some_func1") [variable1] result) + (call relay ("some_service_alias" "some_func2") [variable2 result]) +) +``` + +and the associated data file: + +```javascript +// on_start.json data for on_start.air +{ + "variable1" : "some_string", + "variable2" : 5 +} +``` + +### **Scheduling Script** + +Scheduling scripts allow us to decouple service execution from the client and instead rely on a cron-like scheduler running on a node to trigger our service(s). + +### Directory Structure + +Now that we have our requirements covered, we can populate the directory structure we started to lay out at the beginning of this section. As mentioned above, service deployment as a builtin is an automated workflow once our PR is accepted. Hence, it is imperative to adhere to the directory structure below: + +``` +-- builtins + -- {service_alias} + -- scheduled + -- {script_name}_{interval_in_seconds}.air [optional] + -- blueprint.json + -- on_start.air [optional] + -- on_start.json [optional] + -- {module1_name}.wasm + -- {module1_name}_config.json + -- Hash(module2_name.wasm).wasm + -- Hash(module2_name.wasm)_config.json + ... +``` + +For a complete example, please see the [aqua-dht](https://github.com/fluencelabs/fluence/tree/master/deploy/builtins/aqua-dht) builtin: + +``` +fluence + --deploy + --builtins + --aqua-dht + -aqua-dht.wasm + -aqua-dht_config.json + -blueprint.json + -scheduled + -sqlite3.wasm # or 558a483b1c141b66765947cf6a674abe5af2bb5b86244dfca41e5f5eb2a86e9e.wasm + -sqlite3_config.json # or 558a483b1c141b66765947cf6a674abe5af2bb5b86244dfca41e5f5eb2a86e9e_config.json +``` + +which is based on the [eponymous](https://github.com/fluencelabs/aqua-dht) service project. 
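The scheduled-script filename encodes the trigger interval, per the `{script_name}_{interval_in_seconds}.air` convention shown in the directory tree above. A tiny sketch (hypothetical helper, plain Rust) of constructing such a name:

```rust
// Build a scheduled-script filename following the
// `{script_name}_{interval_in_seconds}.air` convention.
fn scheduled_script_filename(script_name: &str, interval_in_seconds: u64) -> String {
    format!("{}_{}.air", script_name, interval_in_seconds)
}
```

For example, a script named `refresh` that should run hourly would be committed as `refresh_3600.air`.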
diff --git a/tutorials_tutorials/curl-as-a-service.md b/tutorials_tutorials/curl-as-a-service.md new file mode 100644 index 0000000..18a9378 --- /dev/null +++ b/tutorials_tutorials/curl-as-a-service.md @@ -0,0 +1,106 @@ +# cUrl As A Service + +### Overview + +[Curl](https://curl.se) is a widely available and used command-line tool to receive or send data using URL syntax. Chances are, you just used it when you set up your Fluence development environment. For Fluence services to be able to interact with the world, cUrl is one option to facilitate HTTPS calls. Since Fluence modules are Wasm IT modules, cUrl cannot be a service intrinsic. Instead, the curl command-line tool needs to be made available and accessible at the node level, and for Fluence services to be able to interact with cUrl, we need to code a cUrl adapter that takes care of the mounted (cUrl) binary. + +### Adapter Construction + +The work for the cUrl adapter has been fundamentally done and is exposed by the Fluence Rust SDK. As a developer, the task remaining is to instantiate the adapter in the context of the module and services scope. The following code [snippet](https://github.com/fluencelabs/marine/tree/master/examples/url-downloader/curl\_adapter) illustrates the implementation requirement. It is part of a larger service, [url-downloader](https://github.com/fluencelabs/marine/tree/master/examples/url-downloader). 
+ +```rust +use marine_rs_sdk::marine; + +use marine_rs_sdk::WasmLoggerBuilder; +use marine_rs_sdk::MountedBinaryResult; + +pub fn main() { + WasmLoggerBuilder::new().build().unwrap(); +} + +#[marine] +pub fn download(url: String) -> String { + log::info!("download called with url {}", url); + + let result = unsafe { curl(vec![url]) }; + String::from_utf8(result.stdout).unwrap() +} + +#[marine] +#[link(wasm_import_module = "host")] +extern "C" { + fn curl(cmd: Vec<String>) -> MountedBinaryResult; +} +``` + +with the following dependencies necessary in the Cargo.toml: + +```toml +marine-rs-sdk = { version = "=0.6.11", features = ["logger"] } +log = "0.4.8" +``` + +We are basically linking the [external](https://doc.rust-lang.org/std/keyword.extern.html) cUrl binary and exposing access to it as a `marine` interface called `download`. + +### Code References + +* [Mounted binaries](https://github.com/fluencelabs/fce/blob/c559f3f2266b924398c203a45863ebf2fb9252ec/fluence-faas/src/host\_imports/mounted\_binaries.rs) +* [cUrl](https://github.com/curl/curl) + +### Service Construction + +In order to create a valid Fluence service, a service configuration is required. + +``` +modules_dir = "target/wasm32-wasi/release" + +[[module]] + name = "curl_adapter" + logger_enabled = true + + [module.mounted_binaries] + curl = "/usr/bin/curl" +``` + +We are specifying the location of the Wasm file, the import name of the Wasm file, some logging housekeeping, and the mounted binary reference with the command-line call information. + +### Remote Service Creation + +```bash +cargo new curl_adapter +cd curl_adapter +# copy the above rust code into src/main.rs +# copy the specified dependencies into Cargo.toml +# copy the above service configuration into Config.toml + +marine build --release +``` + +You should have the Fluence module curl_adapter.wasm in `target/wasm32-wasi/release`. We can test our service with `mrepl`. 
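One caveat in the adapter above: `String::from_utf8(result.stdout).unwrap()` panics if curl ever emits bytes that are not valid UTF-8. A more defensive decoding, sketched here in plain Rust independent of the Marine SDK (the helper name is made up), substitutes the replacement character instead of aborting:

```rust
// Decode a mounted binary's stdout without panicking on invalid UTF-8;
// invalid byte sequences are replaced with U+FFFD.
fn stdout_to_string(stdout: Vec<u8>) -> String {
    String::from_utf8_lossy(&stdout).into_owned()
}
```

In the adapter, the `download` body would then end with `stdout_to_string(result.stdout)` rather than the panicking `unwrap`.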
+ +### Service Testing + +Running the REPL, we use the `interface` command to list all available interfaces and the `call` command to run a method. For our purposes, we furnish the [https://duckduckgo.com/?q=Fluence+Labs](https://duckduckgo.com/?q=Fluence+Labs) url to give the curl adapter a workout. + +```bash +mrepl Config.toml +Welcome to the Marine REPL (version 0.9.1) +Minimal supported versions + sdk: 0.6.0 + interface-types: 0.20.0 + +app service was created with service id = ee63d3ae-304a-42bb-9a2d-4a4dc056a68b +elapsed time 76.5175ms + +1> interface +Loaded modules interface: + +curl_adapter: + fn download(url: string) -> string + +2> call curl_adapter download ["https://duckduckgo.com/?q=Fluence+Labs"] +result: String("Fluence Labs at DuckDuckGo
") + elapsed time: 356.953459ms + +3> +``` diff --git a/tutorials_tutorials/recipes_setting_up.md b/tutorials_tutorials/recipes_setting_up.md new file mode 100644 index 0000000..3bba627 --- /dev/null +++ b/tutorials_tutorials/recipes_setting_up.md @@ -0,0 +1,80 @@ +# Setting Up Your Environment + +In order to develop within the Fluence solution, [Node](https://nodejs.org/en/), [Rust](https://www.rust-lang.org/tools/install) and a small number of tools are required. + +### NodeJs + +Download the [installer](https://nodejs.org/en/download/) for your platform and follow the instructions. + +### Rust + +If you're on Linux, MacOS or another Unix-like system, you can install Rust like this: + +```bash +curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh +``` + +If you're on another platform, please see [rustup.rs](https://rustup.rs) for instructions. + +Once Rust is installed, we need to expand the toolchain and include the [nightly build](https://rust-lang.github.io/rustup/concepts/channels.html) and the [Wasm](https://doc.rust-lang.org/stable/nightly-rustc/rustc\_target/spec/wasm32\_wasi/index.html) compile target. + +```bash +rustup install nightly +rustup target add wasm32-wasi +``` + +To keep Rust and the toolchains updated: + +```bash +rustup self update +rustup update +``` + +There are a number of good Rust installation and IDE integration tutorials available. [DuckDuckGo](https://duckduckgo.com) is your friend, but if that's too much effort, have a look at [koderhq](https://www.koderhq.com/tutorial/rust/environment-setup/). Please note, however, that currently only VSCode is supported with Aqua syntax support. + +### Aqua Tools + +The Aqua compiler and standard library can be installed via npm: + +``` +npm -g install @fluencelabs/aqua +npm -g install @fluencelabs/aqua-lib +``` + +If you are a VSCode user, note that an Aqua syntax-highlighting extension is available. 
In VSCode, click on the Extensions button, search for `aqua` and install the extension. + +![](https://gblobscdn.gitbook.com/assets%2F-MbmEhQUL-bljop\_DzuP%2F-MdMDybZMQJ5kUjN4zhr%2F-MdME2UUjaxKs6pzcDLH%2FScreen%20Shot%202021-06-29%20at%201.06.39%20PM.png?alt=media\&token=812fcb5c-cf28-4240-b072-a51093d0aaa4) + +Moreover, the aqua-playground provides a ready-to-go TypeScript template and Aqua example. In a directory of your choice: + +``` +git clone git@github.com:fluencelabs/aqua-playground.git +``` + +### Marine Tools + +Fluence provides several tools to support developers. `marine` is the command-line compiler required to compile Rust modules to the necessary wasm32-wasi target. `mrepl`, on the other hand, is a command-line tool providing access to the Marine runtime to test and experiment with marine modules and services locally: + +```bash +cargo install marine +cargo +nightly install mrepl +``` + +### Aqua Compiler And CLI Tool + +Fluence `aqua` provides both compiler and CLI tools for the lifecycle management of distributed services, including deployment and execution tools. + +``` +npm -g install @fluencelabs/aqua +``` + +### Fluence JS + +For frontend development, the Fluence [JS](https://github.com/fluencelabs/fluence-js) is currently the favored, and only, tool. + +```bash +npm install @fluencelabs/fluence +``` + diff --git a/tutorials_tutorials/tutorial_run_local_node.md b/tutorials_tutorials/tutorial_run_local_node.md new file mode 100644 index 0000000..5011a57 --- /dev/null +++ b/tutorials_tutorials/tutorial_run_local_node.md @@ -0,0 +1,245 @@ +# Deploy A Local Fluence Node + +A significant chunk of the developing and testing of Fluence services can be accomplished on an isolated, local node. In this brief tutorial we set up a local, dockerized Fluence node and test its functionality. In subsequent tutorials, we cover the steps required to join an existing network or run your own network. 
+ +The fastest way to get a Fluence node up and running is to use [docker](https://docs.docker.com/get-docker/): + +```bash +docker run -d --name fluence -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 -p 5001:5001 -p 18080:18080 fluencelabs/fluence +``` + +where the `-d` flag runs the container in detached mode, the `-e` flag sets the environment variables and the `-p` flags expose the ports: 7777 is the tcp port, 9999 the websocket port, 5001 the ipfs port and 18080 the Prometheus port. Note that Prometheus is queried at `/metrics`, e.g., `http://127.0.0.1:18080/metrics`. + +Once the container is up and running, we can tail the log (output) with + +``` +docker logs -f fluence +``` + +which gives us the logged output: + +```bash +[2021-12-02T19:42:20.734559Z INFO particle_node] + +-------------------------------------------------+ + | Hello from the Fluence Team. If you encounter | + | any troubles with node operation, please update | + | the node via | + | docker pull fluencelabs/fluence:latest | + | | + | or contact us at | + | github.com/fluencelabs/fluence/discussions | + +-------------------------------------------------+ + +[2021-12-02T19:42:20.734599Z INFO server_config::resolved_config] Loading config from "/.fluence/v1/Config.toml" +[2021-12-02T19:42:20.734842Z INFO server_config::keys] Generating a new key pair to "/.fluence/v1/builtins_secret_key.ed25519" +[2021-12-02T19:42:20.735133Z INFO server_config::keys] Generating a new key pair to "/.fluence/v1/secret_key.ed25519" +[2021-12-02T19:42:20.735409Z WARN server_config::defaults] New management key generated. 
ed25519 private key in base64 = M2sMsy5qguJIEttNct1+OBmbMhVELRUzBX9836A+yNE= +[2021-12-02T19:42:20.736364Z INFO particle_node] AIR interpreter: "/.fluence/v1/aquamarine_0.16.0-restriction-operator.9.wasm" +[2021-12-02T19:42:20.736403Z INFO particle_node::config::certificates] storing new certificate for the key pair +[2021-12-02T19:42:20.736589Z INFO particle_node] node public key = 3iMsSHKmtioSHoTudBAn5dTtUpKGnZeVGvRpEV1NvVLH +[2021-12-02T19:42:20.736616Z INFO particle_node] node server peer id = 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +[2021-12-02T19:42:20.739248Z INFO particle_node::node] Fluence listening on ["/ip4/0.0.0.0/tcp/7777", "/ip4/0.0.0.0/tcp/9999/ws"] + +``` + +For future interaction with the node, we need to retain the server peer id `12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D`, which may be different for you. + +And if you feel the need to snoop around the container: + +```bash +docker exec -it fluence bash +``` + +will get you in. + +Now that we have a local node, we can use the `aqua` CLI to interact with it. 
From the Quick Start, you may recall that we need the node-id and node-addr: + +* node-id: `12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D` +* node-addr: `/ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D` + +Let's inspect our node and check for any available modules and interfaces: + +``` +aqua remote list_modules \ + --addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +``` + +This checks on available modules and gives us a list of the builtin modules: + +```bash +Your peerId: 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +[ + { + "config": { + "mem_pages_count": 100 + }, + "hash": "558a483b1c141b66765947cf6a674abe5af2bb5b86244dfca41e5f5eb2a86e9e", + "name": "sqlite3" + }, + { + "config": { + "logger_enabled": true, + "mounted_binaries": { + "ipfs": "/usr/bin/ipfs" + } + }, + "hash": "f72aeaaef7075b8fbf09e101ba82e79cb08abefd0b6e602538bd440ff17c2329", + "name": "ipfs_effector" + }, + { + "config": {}, + "hash": "82353f3ae7cab489b158f6b602acd82c603f0550ed56c7edaa77823a08596d12", + "name": "trust-graph" + }, + { + "config": {}, + "hash": "1a5a7286dc29b76be4752f4cacac8c0122eea9f1d370d7777bcc51493bf3b6b7", + "name": "sqlite3" + }, + { + "config": {}, + "hash": "e6454e0a0b5da5ab3c25f5190cea6d24189de34eaa2f9bce508db5a07ed1a465", + "name": "aqua-dht" + }, + { + "config": { + "logger_enabled": true + }, + "hash": "96fc75aad39341626e7ebf610003e3be7c6c6b377281be4a93bb8205019223b2", + "name": "ipfs_pure" + } +] +``` + +And checking on available interfaces: + +``` +aqua remote list_interfaces \ + --addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +``` + +Results in: + +``` +Your peerId: 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +[] +``` + +Since we just initiated the node, we expect no user-deployed services or interfaces, and the `aqua remote` queries confirm our expectations: besides the builtin modules, the interface list is empty. 
To further explore and validate the node, we can create a small [greeting](https://github.com/fluencelabs/fce/tree/master/examples/greeting) service. + +```bash +mkdir fluence-greeter +cd fluence-greeter +# download the greeting.wasm file into this directory: +# https://github.com/fluencelabs/marine/blob/master/examples/greeting/artifacts/greeting.wasm -- Download button to the right +echo '{ "name":"greeting"}' > greeting_cfg.json +``` + +We just grabbed the greeting Wasm file from the Fluence repo and created a service configuration file, `greeting_cfg.json`, which allows us to create a new GreetingService: + +```bash +aqua remote deploy_service \ + --addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D \ + --data-path configs/echo_greeter_deploy_cfg.json \ + --service echo-greeter +``` + +Which gives us the service id: + +``` +Your peerId: 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +"Going to upload a module..." +2022.02.22 01:42:42 [INFO] created ipfs client to /ip4/127.0.0.1/tcp/5001 +2022.02.22 01:42:42 [INFO] connected to ipfs +2022.02.22 01:42:44 [INFO] file uploaded +"Now time to make a blueprint..." +"Blueprint id:" +"de3e242cb4489f2ed04b4ad8ff0e7cee701b75d86422c51b691dfeee8ab4ed92" +"And your service id is:" +"0b42ec01-c79e-438b-bded-c0b967a532c6" +``` + +We now have a greeting service running on our node. 
As always, take note of the service id. We can check on the deployed interface with: + +```bash +aqua remote get_interface \ + --addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D \ + --id 0b42ec01-c79e-438b-bded-c0b967a532c6 +``` + +Which now lists: + +``` +Your peerId: 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +{ + "function_signatures": [ + { + "arguments": [ + [ + "name", + "string" + ], + [ + "greeter", + "bool" + ] + ], + "name": "greeting", + "output_types": [ + "string" + ] + } + ], + "record_types": [] +} +``` + +Writing a small Aqua script allows us to use the service: + +```python +service GreetingService("service-id"): + greeting: string, bool -> string + +func greeting(name: string, greeter: bool, node: string, greeting_service_id: string) -> string: + on node: + GreetingService greeting_service_id + res <- GreetingService.greeting(name, greeter) + <- res +``` + +We run the script with [`aqua`](https://doc.fluence.dev/aqua-book/getting-started/quick-start): + +``` +aqua run \ + -a /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D \ + -i aqua/ \ + -f 'greeting("Fluence", true, "12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D", "04ef4459-474a-40b5-ba8d-1e9a697206ab")' +``` + +```bash +Your peerId: 12D3KooWAMTVBjHfEnSF54MT4wkXB1CvfDK3XqoGXt7birVsLFj6 +[ + "Hi, Fluence" +] +``` + +Yep, our node and the tools are working as expected. 
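Behind the deployed interface `greeting(name: string, greeter: bool) -> string`, the service logic is roughly the following plain-Rust sketch. The exact behavior of the `greeter` flag is an assumption inferred from the interface and the "Hi, Fluence" output above, not a copy of the deployed module:

```rust
// Sketch of the greeting service logic; the `greeter == false` branch
// ("Bye, ...") is an assumption for illustration.
fn greeting(name: &str, greeter: bool) -> String {
    if greeter {
        format!("Hi, {}", name)
    } else {
        format!("Bye, {}", name)
    }
}
```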
Going back to the logs, we can further verify the script execution: + +```bash +docker logs -f fluence +``` + +And check from the bottom up: + +``` + +[2021-03-12T02:42:51.041267Z INFO aquamarine::particle_executor] Executing particle 14db3aff-b1a9-439e-8890-d0cdc9a0bacd +[2021-03-12T02:42:51.041927Z INFO particle_closures::host_closures] Executed host call "64551400-6296-4701-8e82-daf0b4e02751" "greeting" (96us 700ns) +[2021-03-12T02:42:51.046652Z INFO particle_node::network_api] Sent particle 14db3aff-b1a9-439e-8890-d0cdc9a0bacd to 12D3KooWLFqJwuHNe2kWF8SMgX6cm24L83JUADFcbrj5fC1z3b21 @ [/ip4/172.17.0.1/tcp/61636/ws] +``` + +Looks like our node container and logging are up and running and ready for your development use. As the Fluence team is rapidly developing, make sure you stay up to date. Check the repo or [Docker hub](https://hub.docker.com/r/fluencelabs/fluence) and update with `docker pull fluencelabs/fluence:latest`. + +Happy composing!