* chore(deps): update rust crate lalrpop to 0.20.0
* Update lalrpop-util to compatible version
* Fix code broken by the API change
* Regenerate parsers
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Ivan Boldyrev <ivan@fluence.one>
* feat: set nodejs target for air-beautify
BREAKING CHANGE: this package is now intended for use in nodejs
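As a rough illustration of the Node.js-targeted build, here is a minimal wasm-bindgen export sketch; the `beautify` function name and the `air_beautifier::beautify_to_string` call are assumptions about the crate's API, not confirmed by this changelog:
```rust
use wasm_bindgen::prelude::*;

// Hypothetical export; the real name and signature in
// tools/wasm/air-beautify-wasm/src/lib.rs may differ.
#[wasm_bindgen]
pub fn beautify(air_script: &str) -> Result<String, JsError> {
    air_beautifier::beautify_to_string(air_script)
        .map_err(|e| JsError::new(&e.to_string()))
}
```
With the target switched, the package would be built with `wasm-pack build --target nodejs` and consumed via `require`/`import` in Node.js rather than in a browser bundle.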
* Update copyright year
* Update tools/wasm/air-beautify-wasm/src/lib.rs
* chore(testing-framework)!: fix WASM test runner
Native mode was used before because some packages used the native runner
for their tests.
This PR allows explicitly selecting a test runner for tests. Many testing-framework
types are now parametrized with a runner type, with mostly compatible defaults; see the sketch below.
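A hedged sketch of the new shape, reusing the runner and method names from the surrounding entries; the import path, the exact generics, and the constructor signature are assumptions:
```rust
// Assumed import paths; `ReleaseWasmAirRunner` and `from_annotated`
// are named by the entries below.
use air_test_framework::{AirScriptExecutor, ReleaseWasmAirRunner, TestRunParameters};

fn wasm_runner_test() -> Result<(), Box<dyn std::error::Error>> {
    // The executor is now generic over a runner type; the default is
    // assumed to stay compatible with existing tests, while an explicit
    // parameter selects, e.g., the release-mode WASM runner.
    let _executor = AirScriptExecutor::<ReleaseWasmAirRunner>::from_annotated(
        TestRunParameters::from_init_peer_id("init_peer_id"),
        "(null)",
    )?;
    Ok(())
}
```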
* chore(testing-framework): Add `ReleaseWasmAirRunner`
* chore(testing-framework)!: Rename `AirScriptExecutor::simple` to `AirScriptExecutor::from_annotated`.
* chore: release master
* chore: Bump air-interpreter version to 0.40.0
* feat(aquavm-air): Set minimal supported version to 0.40.0
---------
Co-authored-by: Ivan Boldyrev <ivan@fluence.one>
* feat(avm-server)!: keypair and particle ID arguments
Add a `&fluence_keypair::KeyPair` argument to `AVM::call` and
`AVMRunner::call`. This value is forwarded in a deconstructed form to the
WASM AIR interpreter, but is not used there yet. Also, `AVMRunner::call`
gets a `particle_id: String` argument.
feat(air-interpreter)!: `invoke` methods have three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>` and `particle_id: String`.
feat(aquavm-air): `air::execute_air` gets the same three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>` and `particle_id: String`.
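A sketch of the deconstruction mentioned above; only the `key_format: u8` / `secret_key_bytes: Vec<u8>` shape comes from this changelog, the `fluence_keypair` accessor names are assumptions:
```rust
use fluence_keypair::KeyPair;

// Hypothetical helper: turn a keypair into the raw (format byte, secret
// bytes) pair that crosses the WASM boundary to the AIR interpreter.
fn deconstruct_keypair(keypair: &KeyPair) -> (u8, Vec<u8>) {
    let key_format: u8 = keypair.key_format().into(); // assumed accessor
    let secret_key_bytes: Vec<u8> = keypair
        .secret()
        .expect("secret key is exportable"); // assumed accessor
    (key_format, secret_key_bytes)
}
```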
feat(aquavm-air-cli)!: add `--random-key`/`--ed25519-key file` options to AIR CLI.
* feat(avm-server)!: Add `RunnerError::KeypairError`
* chore(bench): Add signature performance benchmarks
These benchmarks contain valid signatures, so they should work with
verification out of the box.
---------
Co-authored-by: Artsiom Shamsutdzinau <shamsartem@gmail.com>
Co-authored-by: folex <0xdxdy@gmail.com>
* chore(bench): update benchmark data
After recent data format changes.
* feat(aquavm-air-cli): send memory size to logger
* feat(performance_metering): report memory size
`performance_metering` collects memory sizes reported by `air run` and
reports the minimal and maximal values.
* feat(air-interpreter-signatures): use (de)serialize_with
Wrapper types do not hold strings anymore; instead they use `(de)serialize_with`
serde attributes to parse keys and signatures at deserialization time.
The `PublicKey` and `Signature` wrapper types implement the `Deref` trait,
returning the inner value.
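A minimal sketch of the `(de)serialize_with` approach; the base58 string form and the plain-bytes inner type are simplifications, the real wrappers parse into concrete key and signature types:
```rust
use serde::{de, Deserialize, Deserializer, Serialize, Serializer};

// Hypothetical wrapper: the value is parsed at deserialization time, so
// the type never holds the raw string.
#[derive(Serialize, Deserialize)]
pub struct Signature(
    #[serde(serialize_with = "to_base58", deserialize_with = "from_base58")]
    Vec<u8>,
);

impl std::ops::Deref for Signature {
    type Target = Vec<u8>;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

fn to_base58<S: Serializer>(bytes: &[u8], s: S) -> Result<S::Ok, S::Error> {
    s.serialize_str(&bs58::encode(bytes).into_string())
}

fn from_base58<'de, D: Deserializer<'de>>(d: D) -> Result<Vec<u8>, D::Error> {
    let s = String::deserialize(d)?;
    bs58::decode(&s).into_vec().map_err(de::Error::custom)
}
```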
A peer signs the multiset of call results and canon results it has produced.
A new `signatures` field, a map from peer public key to signature, is added to the interpreter data.
Signature verification is yet to be done.
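Sketched with assumed names (the real interpreter data has many more fields, and the map's key and value would be the wrapper types above):
```rust
use std::collections::HashMap;

// Hypothetical shape of the addition; string keys stand in for the
// PublicKey/Signature wrapper types.
#[derive(serde::Serialize, serde::Deserialize, Default)]
pub struct InterpreterData {
    // ...existing fields elided...
    /// Map from a peer's public key to its signature over the multiset
    /// of call and canon results that peer produced.
    pub signatures: HashMap<String, String>,
}
```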
* `ValueAggregate` refactoring
0. Service results, canon results and literals are constructed as
separate types that are further wrapped with `ValueAggregate`.
1. `ValueAggregate` is an enum that contains all the provenance info.
2. Construction methods get provenance information as well.
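A hedged sketch of the shape this implies; variant and field names are guesses based on the list above:
```rust
// Hypothetical separate result types; real field sets are richer.
pub struct ServiceResultValue { /* value, tetraplet, ... */ }
pub struct CanonResultValue { /* value, tetraplet, ... */ }
pub struct LiteralValue { /* value, ... */ }

// The enum itself records the provenance: which kind of computation
// produced the value. Construction methods take this info as arguments.
pub enum ValueAggregate {
    ServiceResult(ServiceResultValue),
    Canon(CanonResultValue),
    Literal(LiteralValue),
}
```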
* Rename CID state field
Prepare for adding a canon CID field: rename `canon_tracker`/`canon_store`
to `canon_element_tracker`/`canon_element_store`.
* Add canon result store/tracker
* Rename some structs that have CIDs inside
Reflect explicitly that they contain CIDs inside:
`CanonResultAggregate` -> `CanonResultCidAggregate`
`ServiceResultAggregate` -> `ServiceResultCidAggregate`
---------
Co-authored-by: Mike Voronov <michail.vms@gmail.com>
* fix(air): Fold-over-scalar values had wrong lambda
Previously, iterator values in a fold over a scalar inherited the tetraplet from
the scalar. But for security guarantees, they should have a `.[N]` lambda
appended, where N is the element index.
When fold iterates over a canon or a stream, elements keep their original
tetraplets; tests for that are added.
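An illustrative sketch, with simplified hypothetical types (not AquaVM's own), of the tetraplet adjustment:
```rust
// Simplified tetraplet; the real one has service and function fields too.
#[derive(Clone)]
struct Tetraplet {
    peer_pk: String,
    lambda: String,
}

// Fold over a scalar: the iterator value gets `.[N]` appended so its
// tetraplet points at the exact element, not the whole scalar.
fn scalar_element_tetraplet(scalar: &Tetraplet, index: usize) -> Tetraplet {
    let mut element = scalar.clone();
    element.lambda = format!("{}.[{}]", element.lambda, index);
    element
}
```
Canon and stream elements, by contrast, already carry per-element tetraplets, so they are left untouched.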
* feat(aquavm-air-cli): `run` fails if AquaVM fails
Unless `run --no-fail` is provided. This makes benchmarks fail
on errors unless you pass `--no-fail` to a specific benchmark.
* Fix dashboard and network_explore benches
* Convert benchmark data to new format
* `performance_metering`: use dirs only
Ordinary files like README.md are not considered benchmarks.
* Update `benches/performance_metering/README.md`
* Fix performance report
It looks like performance reports were merged in the wrong order: the data was not
sorted by machine ID. Sorting is needed for stable diffs.
* Run benchmarks on Macbook Air M1