| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Right now the CI takes ~30 minutes; the goal is to have it finish in
under 15 minutes.
The basic trick is to run the longer jobs (no_x86_64 and masan) only
against the recently updated pcaps. The same jobs will run again on schedule
(every night), testing all the traces.
This way the CI will (hopefully!) turn "green" earlier when pushing a new
commit/PR; the full tests are simply deferred.
Details: when `NDPI_TEST_ONLY_RECENTLY_UPDATED_PCAPS` is set,
`tests/do.sh` checks only the latest 10 pcaps (i.e. the most recently
added/updated ones) for *every* configuration.
Note that the no_x86_64 and masan jobs run twice: when pushing/merging and
on schedule (every night)
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Move the ThreadSanitizer job to the scheduled jobs (once a day): all our tests
are intrinsically single-threaded and this job takes quite some time
* Two explicit jobs to test LTO and the Gold linker, used by oss-fuzz
* Two explicit jobs for Windows (with msys2)
* Run the address sanitizer only on the 4 main jobs: newest/oldest gcc/clang
* Reduce the time used by the fuzzing jobs. Note that oss-fuzz is
continuously fuzzing our code!
* Move the no-x86_64 jobs to a dedicated file
This way the main matrix is a bit simpler and the CI jobs finish a bit
sooner
|
| |
|
|
|
|
|
|
|
| |
w/o signing those (#2616)
* can be used for local and CI builds
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
|
|
|
| |
It is deprecated and will be removed from GitHub.
See: https://github.com/actions/runner-images/issues/10721
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Without the `-fsanitize-memory-track-origins` flag, the MSAN job is ~30%
faster. Since this flag is useful only while debugging (and not for
simply discovering memory issues), avoid it in the CI. Note that it is
still enabled by default.
Right now, MinGW runs on *every* Ubuntu build: limit it to the
standard matrix (i.e. ubuntu 20.04, 22.04, 24.04 with the default
configuration), without any sanitizers (note that MinGW doesn't support
*san anyway).
The armhf job is by far the longest job in the CI: remove the asan
configuration to make it faster. Note that we already have a lot of
different jobs (on x86_64) with some sanitizers, and that the other 2 jobs
on arm/s390x don't have asan support anyway.
If we really, really want a job with arm + asan, we can add it as an
async/scheduled job.
Remove an old workaround for the ubuntu jobs.
Avoid installing packages needed only for the documentation.
About the `check_symbols.sh` script: even if it uses the compiled
library/objects, it basically only checks whether we are using, in the
source code, some functions that we shouldn't. We don't need to perform
the same kind of check so many times.
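For context, a minimal sketch (not taken from the nDPI sources) of the kind
of bug MSAN reports: plain MSAN flags the branch on uninitialized memory,
while `-fsanitize-memory-track-origins` additionally reports where the
uninitialized bytes were allocated, which mainly helps while debugging.
```
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  int *v = malloc(4 * sizeof(int));  /* heap memory, never initialized */
  if(v == NULL) return 1;
  if(v[2] > 0)                       /* MSAN: use-of-uninitialized-value here */
    printf("positive\n");
  free(v);
  return 0;
}
```
Built with `clang -fsanitize=memory`, the report stops at the branch; adding
`-fsanitize-memory-track-origins` also points back to the `malloc()` call,
at the cost of the extra run time mentioned above.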
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
TODO: enable parallel tests when using docker with no-x86_64 archs.
When I tried the obvious solution:
```
NDPI_FORCE_PARALLEL_UTESTS=1 NDPI_SKIP_PARALLEL_BAR=1 make check VERBOSE=1
```
I got:
```
Run configuration "caches_cfg" [--cfg=lru.ookla.size,0 --cfg=lru.msteams.ttl,1]
ookla.pcap /bin/sh: 1: run_single_pcap: not found
teams.pcap /bin/sh: 1: run_single_pcap: not found
Run configuration "caches_global" [--cfg=lru.ookla.scope,1 --cfg=lru.bittorrent.scope,1 --cfg=lru.stun.scope,1 --cfg=lru.tls_cert.scope,1 --cfg=lru.mining.scope,1 --cfg=lru.msteams.scope,1 --cfg=lru.stun_zoom.scope,1]
bittorrent.pcap /bin/sh: 1: run_single_pcap: not found
lru_ipv6_caches.pcapng /bin/sh: 1: run_single_pcap: not found
mining.pcapng /bin/sh: 1: run_single_pcap: not found
...
```
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
GitHub switched "macos-latest" from "macos-12" to "macos-14", which is
ARM64-only!
https://github.com/actions/runner/issues/3256
https://github.blog/changelog/2024-01-30-github-actions-macos-14-sonoma-is-now-available/
However, we are having some issues building nDPI on macos-14 with external
libraries:
```
configure: error: libgpg-error required (because of --with-local-libgcrypt) but not found or too old.
```
See: https://github.com/ntop/nDPI/actions/runs/8869020568/job/24350356867
```
ndpi_utils.c:69:10: fatal error: 'pcre2.h' file not found
^~~~~~~~~
1 error generated.
```
See: https://github.com/ntop/nDPI/actions/runs/8869020568/job/24349242251
Everything is still fine with macos-14 and no external dependencies.
As a workaround, test only macos-12 and macos-13 in our main matrix.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* added `-Wextra` to the CI
```
In file included from ndpi_bitmap64_fuse.c:31:
./third_party/include/binaryfusefilter.h:31:24: error: unused function 'binary_fuse_rotl64' [-Werror,-Wunused-function]
static inline uint64_t binary_fuse_rotl64(uint64_t n, unsigned int c) {
..snip..
```
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
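One common way to keep `-Wextra`/`-Wunused-function` quiet for helpers coming
from a third-party header is sketched below; this is illustrative only and not
necessarily the fix applied to `binaryfusefilter.h` (the `MAYBE_UNUSED` macro
name is made up here).
```
#include <stdint.h>

#if defined(__GNUC__) || defined(__clang__)
#define MAYBE_UNUSED __attribute__((unused))
#else
#define MAYBE_UNUSED
#endif

/* Same shape as binary_fuse_rotl64(), but marked so that translation units
 * which include the header without using it do not trigger the warning. */
MAYBE_UNUSED static inline uint64_t rotl64_example(uint64_t n, unsigned int c) {
  return (n << (c & 63)) | (n >> ((64 - c) & 63));
}

int main(void) { return (int)(rotl64_example(1, 1) & 0); }
```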
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Integrated RoaringBitmap v3
* Renamed ndpi_bitmap64 to ndpi_bitmap64_fuse
* Fixes to ndpi_bitmap for new roaring library
* Fixes for bitmap serialization
* Fixed format
* Warning fix
* Conversion fix
* Warning fix
* Added check for roaring v3 support
* Updated file name
* Updated path
* Uses clang-9 (instead of clang-7) for builds
* Fixed fuzz_ds_bitmap64_fuse
* Fixes nDPI printf handling
* Disabled printf
* Yet another printf fix
* Cleanup
* Fix for compiling on older platforms
* Fixes for old compilers
* Initialization changes
* Added compiler check
* Fixes for old compilers
* Inline function is not static inline
* Added missing include
|
|
|
|
|
|
|
| |
* `ndpi_typedefs.h`: needs to include `ndpi_config.h` for the `HAVE_STRUCT_TIMESPEC` check
That will never happen, because `USE_GLOBAL_CONTEXT` is defined inside `ndpi_config.h`.
It's better to use `CFLAGS` to achieve the same.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
|
|
| |
See: https://github.com/actions/runner-images/issues/9491
|
|
|
|
| |
Workaround for Homebrew's python link error
See: https://github.com/Homebrew/homebrew-core/issues/165793#issuecomment-1991817938
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add a simple job with macos-14 on M1.
https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/
There are some issues with external dependencies (they are installed but
the autoconf script doesn't find them), so keep it simple.
On macos-13 it seems that:
* there is no `realpath` program (even though coreutils has been
installed...)
* most of the filesystem is read-only (we can't write to /usr/lib).
So I changed
```
make install DESTDIR=$(realpath _install)
ls -alhHR _install
```
to
```
DESTDIR=/tmp/ndpi make install
ls -alhHR /tmp/ndpi
```
for all the jobs
Fix a warning on GitHub logs:
```
Node.js 16 actions are deprecated. Please update the following actions
to use Node.js 20: actions/checkout@v3. For more information see:
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add the concept of "global context".
Right now every instance of `struct ndpi_detection_module_struct` (we
will call it "local context" in this description) is completely
independent of the others. This provides optimal performance in
multithreaded environments, where we pin each local context to a thread,
and each thread to a specific CPU core: we don't have any data shared
across cores.
Each local context also holds, internally, some information correlating
**different** flows; something like:
```
if flow1 (PeerA <-> Peer B) is PROTOCOL_X; then
flow2 (PeerC <-> PeerD) will be PROTOCOL_Y
```
To get optimal classification results, both flow1 and flow2 must be
processed by the same local context. This is not an issue at all in the by
far most common scenario where there is only one local context, but it
might be impractical in some more complex scenarios.
Create the concept of a "global context": multiple local contexts can use
the same global context and share some data (structures) through it.
This way the data correlating multiple flows can be read/written by
different local contexts.
This is an optional feature, disabled by default.
Obviously, data structures shared in a global context must be thread-safe.
This PR updates the code of the LRU implementation to be, optionally,
thread-safe.
Right now, only the LRU caches can be shared; the other main structures
(trees and automas) are basically read-only: there is little sense in
sharing them. Furthermore, these structures don't hold any information
correlating multiple flows.
Every LRU cache can be shared, independently of the others, via
`ndpi_set_config(ndpi_struct, NULL, "lru.$CACHE_NAME.scope", "1")`.
It's up to the user to find the right trade-off between performance
(i.e. without shared data) and classification results (i.e. with some
data shared among the local contexts), depending on the specific traffic
patterns and on the algorithms used to balance the flows across the
threads/cores/local contexts.
Add some basic examples of library initialization in
`doc/library_initialization.md`.
This code needs libpthread as an external dependency. It shouldn't be a big
issue; however, a configure flag has been added to disable global context
support. A new CI job has been added to test it.
TODO: we still need to find a proper way to add some tests in a
multithreaded environment... not an easy task...
*** API changes ***
If you are not interested in this feature, simply add a NULL parameter to
every `ndpi_init_detection_module()` call.
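A minimal usage sketch of the resulting API (an illustration of this
description, not code from the PR; the `ndpi_global_init()` /
`ndpi_global_deinit()` names for the global-context constructor/destructor
are an assumption):
```
#include "ndpi_api.h"

int main(void) {
  /* One shared global context (assumed constructor name)... */
  struct ndpi_global_context *g_ctx = ndpi_global_init();

  /* ...used by two independent local contexts (e.g. one per thread). */
  struct ndpi_detection_module_struct *l1 = ndpi_init_detection_module(g_ctx);
  struct ndpi_detection_module_struct *l2 = ndpi_init_detection_module(g_ctx);

  /* Share only the MS Teams LRU cache across the local contexts. */
  ndpi_set_config(l1, NULL, "lru.msteams.scope", "1");
  ndpi_set_config(l2, NULL, "lru.msteams.scope", "1");

  ndpi_finalize_initialization(l1);
  ndpi_finalize_initialization(l2);

  /* ... hand flows to l1/l2 from different threads ... */

  ndpi_exit_detection_module(l1);
  ndpi_exit_detection_module(l2);
  ndpi_global_deinit(g_ctx);  /* assumed destructor name */
  return 0;
}
```
Passing `NULL` instead of `g_ctx` keeps today's fully independent behaviour,
which is also the default.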
|
|
|
|
|
| |
Change the working directory of `ndpiReader` in the GitHub Actions so
that it can load the domain suffix list during `domainsUnitTest()`.
|
|
|
|
|
| |
Try using the latest gcc and clang versions.
We still care about RHEL7: since handling a RHEL7 runner on GitHub is
quite complex, let's try to use a similar version of gcc, at least.
|
|
|
|
|
|
|
|
|
|
| |
Move from PCRE to PCRE2. PCRE is EOL and won't receive any security
updates anymore. Convert to PCRE2 by porting every call to the new
PCRE2 API.
Also update every entry in the GitHub workflows and the README to point
to the new configure flag (`--with-pcre2`).
Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
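For reference, the conversion pattern is roughly `pcre_compile()` ->
`pcre2_compile()` and `pcre_exec()` -> `pcre2_match()`, plus an explicit
match-data object. A self-contained sketch (not the actual nDPI code):
```
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>

/* Returns 1 on match, 0 on no match, -1 on error. */
static int pcre2_matches(const char *pattern, const char *subject) {
  int errcode, rc, ret;
  PCRE2_SIZE erroffset;

  pcre2_code *re = pcre2_compile((PCRE2_SPTR)pattern, PCRE2_ZERO_TERMINATED,
                                 0, &errcode, &erroffset, NULL);
  if(re == NULL) return -1;

  pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
  rc = pcre2_match(re, (PCRE2_SPTR)subject, PCRE2_ZERO_TERMINATED,
                   0, 0, md, NULL);
  ret = (rc > 0) ? 1 : (rc == PCRE2_ERROR_NOMATCH ? 0 : -1);

  pcre2_match_data_free(md);
  pcre2_code_free(re);
  return ret;
}

int main(void) { return pcre2_matches("^foo", "foobar") == 1 ? 0 : 1; }
```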
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Refreshed the Belgium Gambling Site list data
Unfortunately some hostnames have been removed from that list,
which means they disappear from the `ndpi_gambling_match.c.inc`
file as well.
* build: added `libxml2-utils` (for `xmllint`)
* Included Gambling website data from the Polish `hazard.mf.gov.pl` list
The list contains over 30k gambling website hostnames as of today.
|
|
|
|
|
| |
* added CI check
Signed-off-by: lns <matzeton@googlemail.com>
|
|
|
| |
Add support for Facebook crawler
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
CI duration is quite long: the longest job is the "Performance" one.
Try to reduce the overall duration: that job (and some others) will no
longer be triggered for each PR/commit, but asynchronously, once a day
(this schedule seems right given the frequency of PRs/commits in
the project).
It should still be possible to trigger them manually via the GUI.
Remove two identical jobs; we already test ASAN with 4 different
compilers.
After 9eff0754 it is safe to reduce the fuzzing time.
Bottom line: try to make the duration of the fuzzing jobs the upper
bound of the overall CI run time.
|
|
|
| |
See: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit adds (optional) support for Link-Time Optimization (LTO) and
the Gold linker.
This is the first, mandatory step needed to make nDPI compliant with the
"introspector" sanitizer requirements in OSS-Fuzz: see
https://github.com/google/oss-fuzz/issues/8939
The Gold linker is not supported on Windows and macOS, so this feature is
disabled by default. It has been enabled in the CI for two Linux targets
("latest" gcc and clang).
Fix some warnings triggered by LTO.
The changes in `src/lib/ndpi_serializer.c` seem reasonable.
However, the change in `tests/unit/unit.c` is due to the following
warning, which seems to be a false positive:
```
unit.c: In function ‘serializerUnitTest’:
ndpi_serializer.c:2258:13: error: ‘MEM[(struct ndpi_private_serializer *)&deserializer].buffer.size’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
unit.c:67:31: note: ‘MEM[(struct ndpi_private_serializer *)&deserializer].buffer.size’ was declared here
67 | ndpi_serializer serializer, deserializer;
| ^
ndpi_serializer.c:2605:10: error: ‘MEM[(struct ndpi_private_serializer *)&deserializer].status.buffer.size_used’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
unit.c:67:31: note: ‘MEM[(struct ndpi_private_serializer *)&deserializer].status.buffer.size_used’ was declared here
67 | ndpi_serializer serializer, deserializer;
```
Since this warning is triggered only with an old version of gcc and
`tests/unit/unit.c` is used only during the tests, the easiest fix has
been applied.
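One common way to silence this class of `-Wmaybe-uninitialized` false
positive on older gcc (a sketch; not necessarily the exact change applied
here) is to zero the on-stack objects up front, so every path the optimizer
can see starts from a defined state:
```
#include <stdio.h>
#include <string.h>

/* Toy stand-in for ndpi_serializer, just to illustrate the pattern. */
typedef struct { char *buf; unsigned int size; unsigned int size_used; } toy_serializer;

int main(void) {
  toy_serializer serializer, deserializer;

  /* Zero-initialization gives the (old) optimizer a defined value on every
   * path, which is enough to quiet -Wmaybe-uninitialized under LTO. */
  memset(&serializer, 0, sizeof(serializer));
  memset(&deserializer, 0, sizeof(deserializer));

  printf("%u\n", deserializer.size_used);
  return 0;
}
```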
Some (unknown to me) combinations of OS and compiler trigger the
following warnings at link time (with sanitizers and the Gold linker):
```
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_load1_asm'
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_load2_asm'
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_load4_asm'
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_load8_asm'
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_load16_asm'
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_store1_asm'
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_store2_asm'
/usr/bin/ld.gold: warning: Cannot export local symbol '__asan_report_store4_asm'
[..]
```
I have not found any references to this kind of message, with the only
exception of https://sourceware.org/bugzilla/show_bug.cgi?id=25975
which seems to suggest that these messages can be safely ignored.
In any case, the compilation results are sound.
Fix the `clean` target in the Makefile in the `example` directory.
In OSS-Fuzz environments, `fuzz_ndpi_reader` reports a strange link error
(as always, when the gold linker is involved...).
It turned out that the culprit was the `tempnam` function: the code has
been changed to use `tmpfile` instead. Not sure why... :(
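The `tempnam()` -> `tmpfile()` switch boils down to something like the
following (illustrative only; the surrounding fuzzer code is not shown):
```
#include <stdio.h>
#include <stdlib.h>

/* Before: tempnam() only builds a path; the file still has to be opened and
 * later removed by hand, and this call was the culprit of the OSS-Fuzz link
 * error mentioned above. */
FILE *scratch_old(void) {
  char *path = tempnam(NULL, "ndpi");
  FILE *f = (path != NULL) ? fopen(path, "w+b") : NULL;
  free(path);
  return f;
}

/* After: tmpfile() returns an anonymous temporary file that is deleted
 * automatically when it is closed (or when the process exits). */
FILE *scratch_new(void) {
  return tmpfile();
}

int main(void) {
  FILE *f = scratch_new();
  if(f != NULL) fclose(f);
  return 0;
}
```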
The fuzzing target `fuzz_ndpi_reader.c` doesn't use `libndpiReader.a`
anymore: this way we can use the `--with-only-libndpi` flag on OSS-Fuzz
builds as a workaround for the "missing dependencies errors" described in
https://github.com/google/oss-fuzz/issues/8939
|
|
|
|
|
|
|
|
|
|
|
| |
GitHub is moving `ubuntu-latest` to `ubuntu-22.04`: update our
dependencies.
See: https://github.blog/changelog/2022-11-09-github-actions-ubuntu-latest-workflows-will-use-ubuntu-22-04/
This is the reason for the recent random failures in the CI.
Update "newest" tested gcc to gcc-12.
Fix a memory error introduced in 557bbcfc5a5165c9eb43bbdd78435796239cd3c9
|
|
|
|
|
|
|
| |
```
The `set-output` command is deprecated and will be disabled soon.
Please upgrade to using Environment Files. For more information see:
https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
```
|
|
|
|
|
|
| |
Fix warnings on recent CI results; example:
https://github.com/ntop/nDPI/actions/runs/3455588082
See: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
|
|
|
|
|
|
| |
* add CI support via MSBuild
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
|
|
| |
Add one CI job testing nBPF
|
|
|
|
|
|
|
|
|
|
| |
ubuntu-18.04 is deprecated (ubuntu-latest points to 20.04).
macos-latest points to macos-11, so it makes sense to test macos-12,
too.
About the compilers, the general idea is to test the oldest and the
newest versions easily available: switch to gcc-11 and clang-14.
See: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
|
|
|
|
|
| |
* added static assert if supported, to complain if the flow struct changes
Signed-off-by: lns <matzeton@googlemail.com>
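The idea, roughly (a self-contained sketch with a stand-in struct, not the
actual nDPI flow structure or its real size constant):
```
#include <stddef.h>

struct example_flow {            /* stand-in for struct ndpi_flow_struct */
  unsigned char detected_protocol_stack[2];
  unsigned char state[1022];
};

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L  /* C11 only */
/* Fails the build (with a clear message) if someone changes the layout
 * without updating the expected size. */
_Static_assert(sizeof(struct example_flow) == 1024,
               "example_flow size changed: review memory usage and update the check");
#endif

int main(void) { return (int)(sizeof(struct example_flow) - 1024); }
```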
|
| |
|
|
|
|
|
|
|
| |
* CI fixes
* some build systems do not like that (e.g. OpenWrt)
* fixed some rrdtool related build warnings/errors
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
|
|
|
|
|
| |
* use -ltcmalloc_and_profiler and try to get rid of LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libprofiler.so
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
|
|
|
|
| |
Signed-off-by: lns <matzeton@googlemail.com>
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
|
|
| |
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
|
|
|
|
|
|
|
| |
GCC analyzer won't complain about possible use-after-free (false positive).
* tests/do.sh prints word diffs only once instead of the same output over and over again
* sync unit tests
Signed-off-by: lns <matzeton@googlemail.com>
|
|
|
|
|
|
|
| |
* make check great again (not so much)
* make doc/doc-view
* CI updates
Signed-off-by: lns <matzeton@googlemail.com>
|
|
|
|
|
| |
* Sync unit tests
Signed-off-by: lns <matzeton@googlemail.com>
|
|
|
|
|
| |
* Integrated Doxygen documentation into Sphinx
Signed-off-by: lns <matzeton@googlemail.com>
|
|
|
|
|
|
|
| |
This fixes some build/test issues that occur when using tarballs.
* nDPI uses autotools (especially autoconf) in a wrong way, see #1163
Signed-off-by: lns <matzeton@googlemail.com>
|
|
|
|
|
| |
* The warning itself looks like a bug
Signed-off-by: lns <matzeton@googlemail.com>
|
|
|
|
|
|
|
| |
* Removed Visual Studio leftovers. Maintaining an autotools project with VS integration requires some additional overhead.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
Signed-off-by: lns <matzeton@googlemail.com>
|
| |
|
| |
|
|
|
|
| |
packaging and CI integration)
|