| Commit message | Author | Age |
| |
Running unit tests is quite a bottleneck while developing or while
waiting for GitHub CI results...
Try to run the tests in parallel, using the `parallel` tool.
By default, tests still run one after the other, as usual; to enable
parallel execution, run `NDPI_FORCE_PARALLEL_UTESTS=1 ./tests/do.sh`.
Please note that the output is quite different in parallel mode!
A big part of the script has been rewritten to avoid code duplication
between the "serial" and "parallel" paths.
On my notebook:
```
ivan@ivan-Latitude-E6540:~/svnrepos/nDPI(parallel)$ time ./tests/do.sh
[...]
real 3m12,684s
[...]
ivan@ivan-Latitude-E6540:~/svnrepos/nDPI(parallel)$ time NDPI_FORCE_PARALLEL_UTESTS=1 ./tests/do.sh
[...]
real 0m58,463s
```
|
| |
The new implementation fails tests 11b, 12 and 13.
Revert to the original (BSD) implementation (adding some basic
parameter checks as well).
|
| |
IRC's best days are well behind it, but there are still some servers
using it.
We should try to simplify the detection logic, which is still based on the
original OpenDPI code.
Let's start with some easy changes:
* try to detect TLS connections via standard hostname/SNI matching,
removing an old heuristic (we have never had any trace matching it);
* add some basic server names;
* once we detect that the flow is IRC, we don't have to perform
anything else;
* remove the HTTP stuff; real HTTP flows never trigger that code path;
* use `ndpi_memmem()` when possible (see the sketch below).
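A minimal sketch of the last point, assuming `ndpi_memmem()` follows the usual
`memmem()`-style signature (haystack, haystack length, needle, needle length);
the `PRIVMSG` token and the helper name are only illustrative, not taken from
the actual dissector:
```
#include <stddef.h>
#include "ndpi_api.h"   /* for ndpi_memmem(); link against libndpi */

/* Hypothetical helper: return non-zero if the payload contains an IRC
 * "PRIVMSG " command, searching the raw bytes with ndpi_memmem() instead
 * of hand-rolled scanning loops. */
static int payload_has_irc_privmsg(const unsigned char *payload, size_t payload_len) {
  return ndpi_memmem(payload, payload_len, "PRIVMSG ", 8) != NULL;
}
```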
|
| |
Co-authored-by: Ivan Nardi <12729895+IvanNardi@users.noreply.github.com>
|
| |
Add an explicit upper limit on the number of packets processed before
giving up.
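A minimal sketch of the pattern, not the actual commit: nDPI dissectors
typically bound their work by checking the flow's packet counter and excluding
themselves once the limit is reached. The dissector name and the limit value
here are made up; `flow->packet_counter` and `NDPI_EXCLUDE_PROTO()` are assumed
to be available as in other dissectors.
```
#include "ndpi_api.h"       /* dissector-style includes; exact headers assumed */
#include "ndpi_private.h"   /* NDPI_EXCLUDE_PROTO() in recent trees (assumption) */

#define EXAMPLE_MAX_PACKETS 10   /* made-up upper bound for the sketch */

static void ndpi_search_example(struct ndpi_detection_module_struct *ndpi_struct,
                                struct ndpi_flow_struct *flow)
{
  /* ... protocol-specific matching would go here ... */

  if(flow->packet_counter > EXAMPLE_MAX_PACKETS) {
    /* Too many packets seen without a match: stop calling this dissector
       for the flow instead of inspecting it forever. */
    NDPI_EXCLUDE_PROTO(ndpi_struct, flow);
  }
}
```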
|
| |
There is some overlap between RTP and Raknet detection: give precedence
to the RTP logic.
Consequences:
* Raknet might require a few more packets for some flows (not a
big issue);
* some very small (1-2 pkts) Raknet flows are not classified (not sure
what to do about that...).
|
| |
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
| |
```
SCARINESS: 12 (1-byte-read-heap-buffer-overflow)
#0 0x557f3a5b5100 in ndpi_get_host_domain /src/ndpi/src/lib/ndpi_domains.c:158:8
#1 0x557f3a59b561 in ndpi_check_dga_name /src/ndpi/src/lib/ndpi_main.c:10412:17
#2 0x557f3a51163a in process_chlo /src/ndpi/src/lib/protocols/quic.c:1467:7
#3 0x557f3a469f4b in LLVMFuzzerTestOneInput /src/ndpi/fuzz/fuzz_quic_get_crypto_data.c:44:7
#4 0x557f3a46abc8 in NaloFuzzerTestOneInput (/out/fuzz_quic_get_crypto_data+0x4cfbc8)
```
Some notes about the leak: if the insertion into the uthash fails (because of an
allocation failure), we need to free the just-allocated entry. But the only
way to check whether `HASH_ADD_*` failed is to perform a new lookup: a bit
costly, but we don't use that code in the fast path (see the sketch below).
See also efb261a95c5a
Credit for finding these issues goes to Philippe Antoine (@catenacyber) and his
`nallocfuzz` fuzzing engine.
See: https://github.com/catenacyber/nallocfuzz
See: https://github.com/google/oss-fuzz/pull/9902
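A self-contained sketch of the pattern described above (illustrative only; the
entry type and function are hypothetical). It assumes `HASH_NONFATAL_OOM` is
defined so that an allocation failure inside uthash drops the insertion instead
of aborting, and that keys are unique:
```
#include <stdlib.h>
#include <string.h>

#define HASH_NONFATAL_OOM 1   /* make uthash drop the insert on OOM instead of aborting */
#include "uthash.h"

struct cache_entry {
  char key[64];
  UT_hash_handle hh;
};

static struct cache_entry *cache = NULL;

/* HASH_ADD_* does not report failures, so after inserting we look the key up
 * again: if the entry we just added is not found, the insertion was silently
 * dropped (e.g. on allocation failure) and we must free it ourselves to avoid
 * a leak. */
static int cache_put(const char *key)
{
  struct cache_entry *entry, *found = NULL;

  entry = calloc(1, sizeof(*entry));
  if(entry == NULL)
    return -1;
  strncpy(entry->key, key, sizeof(entry->key) - 1);

  HASH_ADD_STR(cache, key, entry);

  HASH_FIND_STR(cache, key, found);
  if(found != entry) {
    free(entry);   /* insertion failed: release the orphan entry */
    return -1;
  }

  return 0;
}
```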
|
| |
* credits go to Vladimir Gavrilov
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
|
| |
- int ndpi_load_ipv4_ptree_file(ndpi_ptree_t *tree, const char *path, u_int16_t protocol_id);
- int ndpi_load_ipv6_ptree_file(ndpi_ptree_t *tree, const char *path, u_int16_t protocol_id);
|
| |
ndpi_domain_classify_xxx calls
|
| |
This cache was added in b6b4967aa, when there was no real Zoom support.
With 63f349319, proper identification of multimedia streams has been
added, making this cache quite useless: any improvements to Zoom
classification should be done directly in the Zoom dissector.
Tested for some months on a few 10 Gbit/s links of residential traffic: the
cache pretty much never returned a valid hit.
|
| |
Deciding when a session starts and ends is the responsibility of the
application (via its flow manager), not of the library.
BTW, the removed code was incomplete at best.
|
| |
See RFC 8122: it is quite likely that STUN/DTLS/SRTP flows use
self-signed certificates.
Follow-up of b287d6ec8
|
| |
Avoid code duplication between these two protocols.
We remove support for RTCP over TCP; it is quite rare to find this kind
of traffic and, more importantly, we have never had support for RTP
over TCP: we should try to add detection of both as a follow-up.
Fix a log message in the LINE code.
|
| |
The new values have been checked against the ones reported by Wireshark.
Found while fixing a use-of-uninitialized-value error reported by
oss-fuzz:
```
==7582==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x5a6549abc368 in ndpi_compute_ja4 ndpi/src/lib/protocols/tls.c:1762:10
#1 0x5a6549ab88a0 in processClientServerHello ndpi/src/lib/protocols/tls.c:2863:10
#2 0x5a6549ac1452 in processTLSBlock ndpi/src/lib/protocols/tls.c:909:5
#3 0x5a6549abf588 in ndpi_search_tls_tcp ndpi/src/lib/protocols/tls.c:1098:2
#4 0x5a65499c53ec in check_ndpi_detection_func ndpi/src/lib/ndpi_main.c:7215:6
```
See: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=68449&q=ndpi&can=1&sort=-id
|
| |
eDonkey is definitely not as widely used as it was >10 years ago, but it
seems to be still active.
While basic TCP support seems easy, identification over UDP doesn't
work and it is hard to do correctly (packets might be only 2 bytes long):
remove it.
Credits to V.G <v.gavrilov@securitycode.ru>
|
| |
GitHub switched "macos-latest" from "macos-12" to "macos-14", which is
ARM64-only!
https://github.com/actions/runner/issues/3256
https://github.blog/changelog/2024-01-30-github-actions-macos-14-sonoma-is-now-available/
However, we are having some issues building nDPI on macos-14 with external
libraries:
```
configure: error: libgpg-error required (because of --with-local-libgcrypt) but not found or too old.
```
See: https://github.com/ntop/nDPI/actions/runs/8869020568/job/24350356867
```
ndpi_utils.c:69:10: fatal error: 'pcre2.h' file not found
^~~~~~~~~
1 error generated.
```
See: https://github.com/ntop/nDPI/actions/runs/8869020568/job/24349242251
Everything is still fine with macos-14 and no external dependencies.
As a workaround, test only macos-12 and macos-13 in our main matrix.
|
| |
The P2P video player PPStream was discontinued shortly after the purchase of PPS.tv by Baidu (iQIYI) in 2013 (see https://www.techinasia.com/report-baidu-acquires-video-rival-pps).
So we remove the old `NDPI_PROTOCOL_PPSTREAM` logic and add the `NDPI_PROTOCOL_IQIYI` id to handle all the iQIYI traffic, which is basically video streaming traffic.
A video hosting service, called PPS.tv, is still offered by the same company: for the time being we classify both services with the same protocol id.
|
| |
```
==22779==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7f0900701020 at pc 0x555bcd2a6f02 bp 0x7ffe3ba5e790 sp 0x7ffe3ba5e788
READ of size 1 at 0x7f0900701020 thread T0
#0 0x555bcd2a6f01 in shoco_decompress /home/ivan/svnrepos/nDPI/src/lib/third_party/src/shoco.c:184:26
#1 0x555bcd2a4018 in LLVMFuzzerTestOneInput /home/ivan/svnrepos/nDPI/fuzz/fuzz_alg_shoco.cpp:18:3
#2 0x555bcd1aa816 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) (/home/ivan/svnrepos/nDPI/fuzz/fuzz_alg_shoco+0x4f816) (BuildId: c54d1c32163c9937e06f62127348ae6bd26d9309)
#3 0x555bcd193be8 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) (/home/ivan/svnrepos/nDPI/fuzz/fuzz_alg_shoco+0x38be8) (BuildId: c54d1c32163c9937e06f62127348ae6bd26d9309)
#4 0x555bcd1996fa in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) (/home/ivan/svnrepos/nDPI/fuzz/fuzz_alg_shoco+0x3e6fa) (BuildId: c54d1c32163c9937e06f62127348ae6bd26d9309)
#5 0x555bcd1c3c92 in main (/home/ivan/svnrepos/nDPI/fuzz/fuzz_alg_shoco+0x68c92) (BuildId: c54d1c32163c9937e06f62127348ae6bd26d9309)
#6 0x7f090257a082 in __libc_start_main /build/glibc-e2p3jK/glibc-2.31/csu/../csu/libc-start.c:308:16
#7 0x555bcd18e96d in _start (/home/ivan/svnrepos/nDPI/fuzz/fuzz_alg_shoco+0x3396d) (BuildId: c54d1c32163c9937e06f62127348ae6bd26d9309)
Address 0x7f0900701020 is located in stack of thread T0 at offset 4128 in frame
#0 0x555bcd2a3d97 in LLVMFuzzerTestOneInput /home/ivan/svnrepos/nDPI/fuzz/fuzz_alg_shoco.cpp:5
This frame has 4 object(s):
[32, 4128) 'out' (line 9) <== Memory access at offset 4128 overflows this variable
[4256, 8352) 'orig' (line 9)
```
Found by oss-fuzz.
See: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=68211
|
| |
```
==17==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x000000546050 bp 0x7fff113c82a0 sp 0x7fff113c7a58 T0)
==17==The signal is caused by a READ memory access.
==17==Hint: address points to the zero page.
SCARINESS: 10 (null-deref)
#0 0x546050 in __sanitizer::internal_strlen(char const*) /src/llvm-project/compiler-rt/lib/sanitizer_common/sanitizer_libc.cpp:167:10
#1 0x4c6ba5 in __interceptor_strrchr /src/llvm-project/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:740:5
#2 0x5fb9b9 in ndpi_get_host_domain_suffix /src/ndpi/src/lib/ndpi_domains.c:105:20
#3 0x578058 in LLVMFuzzerTestOneInput /src/ndpi/fuzz/fuzz_config.cpp:503:3
```
Found while fuzzing
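The class of fix this kind of report usually calls for is simply guarding the
string before handing it to `strrchr()`. A hypothetical helper, not the actual
nDPI patch:
```
#include <string.h>

/* Hypothetical domain-suffix helper: bail out early instead of letting
 * strrchr() dereference a NULL hostname. */
static const char *example_domain_suffix(const char *hostname)
{
  const char *dot;

  if(hostname == NULL || hostname[0] == '\0')
    return NULL;                 /* nothing to parse: avoid the NULL deref */

  dot = strrchr(hostname, '.');
  return (dot != NULL) ? dot + 1 : hostname;
}
```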
|
| |
```
nDPI/PcDebug64/src/include/ndpi_api.h:1970:3: error: function declaration isn’t a prototype [-Werror=strict-prototypes]
```
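For context (hypothetical declarations, not the actual header content): in C,
an empty parameter list is not a prototype, so `-Wstrict-prototypes` flags the
first form below; spelling the emptiness out with `void` fixes it.
```
int ndpi_example_fn();       /* not a prototype: triggers -Wstrict-prototypes */
int ndpi_example_fn2(void);  /* proper prototype: no warning                  */
```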
|
| |
Create the zip file with all the traces only once.
Add a new fuzzer to test the "shoco" compression algorithm (an
illustrative harness sketch follows below).
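For illustration only, a minimal libFuzzer-style harness along these lines; it
is not the actual `fuzz_alg_shoco.cpp`, and it uses the public
`ndpi_compress_str()`/`ndpi_decompress_str()` wrappers with made-up buffer
sizes rather than the internal shoco calls the real harness may use:
```
#include <stdint.h>
#include <stddef.h>
#include "ndpi_api.h"   /* ndpi_compress_str() / ndpi_decompress_str() */

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
  char compressed[4096], decompressed[4096];
  size_t clen;

  if(size == 0)
    return 0;

  /* Round-trip whatever bytes the fuzzer feeds us through the shoco-based
   * string (de)compression, relying on the bufsize arguments for bounds. */
  clen = ndpi_compress_str((const char *)data, size, compressed, sizeof(compressed));
  if(clen > 0)
    ndpi_decompress_str(compressed, clen, decompressed, sizeof(decompressed));

  return 0;
}
```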
|
| |
* Add KNXnet/IP protocol support
* Improve KNXnet/IP over TCP detection
|
| |
* Added
size_t ndpi_compress_str(const char * in, size_t len, char * out, size_t bufsize);
size_t ndpi_decompress_str(const char * in, size_t len, char * out, size_t bufsize);
used to compress short strings such as domain names (see the usage sketch
after this list). This code is based on
https://github.com/Ed-von-Schleck/shoco
* Major code rewrite for ndpi_hash and ndpi_domain_classify
* Improvements to make sure custom categories are loaded and enabled
* Fixed string encoding
* Extended SalesForce/Cloudflare domains list
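A small usage sketch of the new calls, assuming the returned `size_t` is the
number of bytes written (0 on failure); the sample string and buffer sizes are
made up for the example:
```
#include <stdio.h>
#include <string.h>
#include "ndpi_api.h"   /* ndpi_compress_str() / ndpi_decompress_str() */

int main(void)
{
  const char *domain = "www.example.org";   /* sample input */
  char packed[64], unpacked[64];
  size_t clen, dlen;

  /* Compress the short string, then expand it back into a separate buffer. */
  clen = ndpi_compress_str(domain, strlen(domain), packed, sizeof(packed));
  dlen = ndpi_decompress_str(packed, clen, unpacked, sizeof(unpacked));

  printf("original %zu bytes -> compressed %zu -> decompressed %zu\n",
         strlen(domain), clen, dlen);
  return 0;
}
```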
|