mirror of https://github.com/fluencelabs/gitbook-docs (synced 2024-12-04 23:30:23 +00:00)
GitBook: [#267] No subject
parent 4d6b0a5d89 · commit 52b46faeed

@@ -26,14 +26,14 @@ git clone https://github.com/fluencelabs/examples

#### Timestamp Acquisition

Each Fluence peer, i.e., a node in the Fluence peer-to-peer network, can provide a timestamp from a [builtin service](https://github.com/fluencelabs/aqua-lib/blob/b90f2dddc335c155995a74d8d97de8dbe6a029d2/builtin.aqua#L127). In Aqua, we can call a [timestamp function](https://github.com/fluencelabs/fluence/blob/527e26e08f3905e53208b575792712eeaee5deca/particle-closures/src/host_closures.rs#L124) with the desired granularity, i.e., [seconds](https://github.com/fluencelabs/aqua-lib/blob/3298a0e23cfc67aca5b896798f8fb4008bd0a74f/builtin.aqua#L133) or [milliseconds](https://github.com/fluencelabs/aqua-lib/blob/3298a0e23cfc67aca5b896798f8fb4008bd0a74f/builtin.aqua#L130), for further processing:

```aqua
-- aqua timestamp sourcing
on node:
    ts_ms_result <- Peer.timestamp_ms()
    -- or
    ts_sec_result <- Peer.timestamp_sec()
    -- ...
```

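As a plain-Python aside (nothing Fluence-specific), the two granularities amount to reading the same clock at different resolutions; `unix_ts_sec` and `unix_ts_ms` below are hypothetical helpers mirroring the two builtins, not part of any Fluence API:

```python
import time

def unix_ts_sec() -> int:
    # seconds granularity, analogous to Peer.timestamp_sec()
    return int(time.time())

def unix_ts_ms() -> int:
    # milliseconds granularity, analogous to Peer.timestamp_ms()
    return time.time_ns() // 1_000_000

sec = unix_ts_sec()
ms = unix_ts_ms()
# the two readings of the same clock agree up to rounding
assert abs(ms // 1000 - sec) <= 1
```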
@@ -43,9 +43,9 @@ In order to decentralize our timestamp oracle, we want to poll multiple peers in

```aqua
-- multi-peer timestamp sourcing
-- ...
results: *u64
for node <- many_peers_list par:
    on node:
        results <- Peer.timestamp_ms()
-- ...
```

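The parallel fan-out with results accumulating into the `results` stream can be approximated, purely as an analogy, with a thread pool; `query_peer` and `many_peers_list` below are illustrative stand-ins, not Fluence APIs:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_peer(peer_id: str) -> int:
    # stand-in for `on node: results <- Peer.timestamp_ms()`
    return time.time_ns() // 1_000_000

many_peers_list = ["peer-a", "peer-b", "peer-c"]

# fan out in parallel, like `for node <- many_peers_list par:`
with ThreadPoolExecutor() as pool:
    results = list(pool.map(query_peer, many_peers_list))

# one timestamp per queried peer
assert len(results) == len(many_peers_list)
```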
@@ -64,7 +64,7 @@ The last thing to pin down concerning our timestamp acquisition is which peers t

```aqua
for node <- nodes par:
    on node:
        try:
            results <- Peer.timestamp_ms()
-- ...
```

@@ -252,7 +252,7 @@ func ts_oracle_with_consensus(tolerance: u32, threshold: f64, err_value:u64, nod

```aqua
    <- consensus, dead_peers -- 12
```

That script is probably a little more involved than what you've seen so far, so let's work through it: In order to get our set of timestamps, we determine the Kademlia neighbors (1) and then proceed to request a timestamp from each of those peers (2) in parallel (3). In an ideal world, each peer responds with a timestamp and the stream variable `res` (4) fills up with the twenty values from the twenty neighbors, which we then fold (5) and push to our consensus service (6). Alas, life in distributed systems isn't quite that simple, since there are no guarantees that a peer is actually available to connect or provide a service response. Since we may never actually connect to a peer (7), we can't expect an error response, meaning that we get a silent failure at (2) and no write to the stream `res`. Subsequently, this leads to the failure of the fold operation (5), since fewer than the expected twenty items are in the stream and the operation (5) ends up timing out waiting for a never-to-arrive timestamp.

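The consensus service itself ships as a Wasm module in the examples repo; as a rough, illustrative rendition of the filter-then-threshold idea only, the sketch below drops the dummy values, clusters readings within `tolerance` of the smallest valid timestamp, and declares consensus if a large-enough fraction agrees. The function name, the anchoring on the minimum, and the averaging are assumptions, not the module's actual algorithm:

```python
def ts_consensus(timestamps, tolerance_ms, threshold, err_value):
    # drop the dummy values written for unreachable peers
    valid = [t for t in timestamps if t != err_value]
    if not valid:
        return None
    anchor = min(valid)  # illustrative choice: cluster near the smallest reading
    in_band = [t for t in valid if t - anchor <= tolerance_ms]
    # consensus only if a large-enough fraction of valid readings agrees
    if len(in_band) / len(valid) >= threshold:
        return sum(in_band) // len(in_band)
    return None

# 4 of 5 valid readings within 10 ms of each other -> consensus on their mean
assert ts_consensus([1000, 1002, 1005, 1009, 9999, 0], 10, 0.66, 0) == 1004
# widely scattered readings -> no consensus
assert ts_consensus([1000, 5000, 9000], 10, 0.66, 0) is None
```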
In order to deal with this issue, we introduce a sleep operation (8) with the builtin [Peer.timeout](https://github.com/fluencelabs/aqua-lib/blob/1193236fe733e75ed0954ed26e1234ab7a6e7c53/builtin.aqua#L135) and run it in parallel with the attempted connection to peer `n` (3), essentially setting up a race to write to the stream: if the peer (`on n`, 7) behaves, we write the timestamp to `res` (2) and make a note of that successful operation (9); else, we write a dummy value, i.e., `err_value`, into the stream (10) and make a note of the delinquent peer (11). Recall that we filter out the dummy `err_value` at the service level.

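Purely as an analogy, the race between the peer call and `Peer.timeout` resembles awaiting a future with a deadline: on a timely answer the real timestamp goes into the stream, on expiry the dummy value does and the peer is recorded as delinquent. Everything below (the helper, `ERR_VALUE`, the fake peers) is illustrative Python under those assumptions, not the Fluence implementation:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

ERR_VALUE = 0  # dummy written when a peer never answers in time

def race_with_timeout(fn, peer_id, timeout_s, res, dead_peers):
    with ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(fn, peer_id)
        try:
            # peer answered in time: real timestamp into the stream
            res.append(fut.result(timeout=timeout_s))
        except TimeoutError:
            # peer missed the deadline: dummy value + note the delinquent peer
            res.append(ERR_VALUE)
            dead_peers.append(peer_id)

res, dead = [], []
race_with_timeout(lambda p: 1_700_000_000_000, "good-peer", 0.5, res, dead)
race_with_timeout(lambda p: time.sleep(2) or 0, "slow-peer", 0.1, res, dead)
assert res == [1_700_000_000_000, ERR_VALUE]
assert dead == ["slow-peer"]
```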
@@ -352,7 +352,7 @@ Your peerId: 12D3KooWP7vAR462JgoagUzGA8s9YccQZ7wsuGigFof7sajiGThr

```
]
```

We encourage you to experiment with and tweak the parameters, both for the consensus algorithm and the timeout settings. Obviously, longer routes make for more timestamp variance even if each timestamp call is "true."

### Summary