I have observed this issue very consistently, but have not been able to pin down a specific set of triggers/conditions that causes the error.
I am running version 3.1.1, checked out directly from the GitHub repo, with a single NeoFire (also seen with a NeoRED8) attached via RJ-45 Ethernet directly between the device and a Linux workstation running Pop!_OS (Ubuntu) 22.04.
The issue seems to occur after some period of time (never a fixed amount), and may be triggered after a certain number of promiscuous network device scans.
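For anyone trying to reproduce this outside of icsscand: the failing path is just device discovery, so a bare loop like the sketch below should exercise the same code as deviceSearchThread() in the backtrace. This assumes libicsneo's icsneo/icsneocpp.h convenience header; FindAllDevices() is the same entry point that appears as frame #15.

```cpp
// Minimal discovery loop (sketch, not my exact client code).
#include <icsneo/icsneocpp.h>

#include <chrono>
#include <iostream>
#include <thread>

int main() {
	for (int i = 0; ; ++i) {
		// Same entry point as frame #15 (icsneo::FindAllDevices) in the backtrace;
		// each call performs a scan of the attached interfaces.
		auto devices = icsneo::FindAllDevices();
		std::cout << "scan " << i << ": found " << devices.size() << " device(s)" << std::endl;
		std::this_thread::sleep_for(std::chrono::seconds(1));
	}
}
```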
Message received:
```
terminate called after throwing an instance of 'std::length_error'
  what():  cannot create std::vector larger than max_size()
```
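For context, this is the exception libstdc++ raises when a std::vector is asked for more elements than max_size(), which typically happens when a size computed from untrusted input wraps around. A minimal standalone illustration of that failure mode (not libicsneo code):

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
	// A parsed length that underflows (header longer than the captured frame)
	// wraps to a huge size_t...
	const std::size_t frameLength = 4;
	const std::size_t headerLength = 18;
	const std::size_t bogusPayloadLength = frameLength - headerLength; // ~1.8e19

	try {
		// ...and asking std::vector for that many elements throws exactly
		// "cannot create std::vector larger than max_size()".
		std::vector<std::uint8_t> payload(bogusPayloadLength);
	} catch (const std::length_error& e) {
		std::cerr << e.what() << std::endl;
	}
}
```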
Core dump from exception ([core.zip](https://github.com/intrepidcs/icsscand/files/12475850/core.zip) attached):
```
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f7d6b71b7ed in __GI___select (nfds=4, readfds=0x7ffccd1fc730, writefds=0x0, exceptfds=0x0, timeout=0x7ffccd1fc6c0) at ../sysdeps/unix/sysv/linux/select.c:69
69 ../sysdeps/unix/sysv/linux/select.c: No such file or directory.
(gdb) c
Continuing.
Thread 2 "libicsneo-socke" received signal SIGABRT, Aborted.
[Switching to Thread 0x7f7d6af96640 (LWP 2706892)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140176642369088) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140176642369088) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140176642369088) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140176642369088, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007f7d6b642476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007f7d6b6287f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007f7d6baa2bbe in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007f7d6baae24c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#7 0x00007f7d6baae2b7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#8 0x00007f7d6baae518 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#9 0x00007f7d6baa548f in std::__throw_length_error(char const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6
#10 0x000056437df3318a in icsneo::EthernetPacketizer::EthernetPacket::loadBytestream(std::vector<unsigned char, std::allocator<unsigned char> > const&) ()
#11 0x000056437df331e0 in icsneo::EthernetPacketizer::EthernetPacket::EthernetPacket(std::vector<unsigned char, std::allocator<unsigned char> > const&) ()
#12 0x000056437df3322f in icsneo::EthernetPacketizer::inputUp(std::vector<unsigned char, std::allocator<unsigned char> >) ()
#13 0x000056437df1ac6a in icsneo::PCAP::Find(std::vector<icsneo::FoundDevice, std::allocator<icsneo::FoundDevice> >&) ()
#14 0x000056437dee45d2 in icsneo::DeviceFinder::FindAll() ()
#15 0x000056437ded1f82 in icsneo::FindAllDevices() ()
#16 0x000056437decd10d in searchForDevices() ()
#17 0x000056437decfe29 in deviceSearchThread() ()
#18 0x00007f7d6badc2b3 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#19 0x00007f7d6b694b43 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#20 0x00007f7d6b726a00 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
(gdb) gcore core.libicsneo-socket
warning: Memory read failed for corefile section, 4096 bytes at 0xffffffffff600000.
Saved corefile core.libicsneo-socket
(gdb) inf thr
Id Target Id Frame
1 Thread 0x7f7d6b82f800 (LWP 2706891) "libicsneo-socke" 0x00007f7d6b71b7ed in __GI___select (nfds=4, readfds=0x7ffccd1fc730, writefds=0x0, exceptfds=0x0, timeout=0x7ffccd1fc6c0)
at ../sysdeps/unix/sysv/linux/select.c:69
* 2 Thread 0x7f7d6af96640 (LWP 2706892) "libicsneo-socke" __pthread_kill_implementation (no_tid=0, signo=6, threadid=140176642369088) at ./nptl/pthread_kill.c:44
3 Thread 0x7f7d6a795640 (LWP 2706910) "libicsneo-socke" 0x00007f7d6b718d7f in __GI___poll (fds=0x7f7d6a794d60, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
4 Thread 0x7f7d69f94640 (LWP 2706911) "libicsneo-socke" 0x00007f7d6b718d7f in __GI___poll (fds=0x7f7d5c000b70, nfds=1, timeout=200) at ../sysdeps/unix/sysv/linux/poll.c:29
5 Thread 0x7f7d69793640 (LWP 2706912) "libicsneo-socke" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x7f7d64004760)
at ./nptl/futex-internal.c:57
6 Thread 0x7f7d68d92640 (LWP 2706913) "libicsneo-socke" 0x00007f7d6b718d7f in __GI___poll (fds=0x7f7d68d91c60, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
7 Thread 0x7f7d63fff640 (LWP 2706914) "libicsneo-socke" __futex_abstimed_wait_common64 (private=<optimized out>, cancel=true, abstime=0x7f7d63ffed20, op=137, expected=0,
futex_word=0x7f7d640232a8) at ./nptl/futex-internal.c:57
8 Thread 0x7f7d637fe640 (LWP 2706915) "libicsneo-socke" __futex_abstimed_wait_common64 (private=<optimized out>, cancel=true, abstime=0x7f7d637fdce0, op=137, expected=0,
futex_word=0x7f7d640240c8) at ./nptl/futex-internal.c:57
9 Thread 0x7f7d62ffd640 (LWP 2706916) "libicsneo-socke" 0x00007f7d6b6e5868 in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0, req=0x7f7d62ffccd0,
rem=0x7f7d62ffccd0) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
(gdb)
```
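I have not dug into the internals of EthernetPacket::loadBytestream(), but frames #10-#13 suggest a size derived from a captured frame reaches a std::vector without validation; since the scan apparently listens promiscuously, frames from unrelated traffic could be shorter than the expected header. A guard along the lines of the sketch below (all names and limits are placeholders, not the actual libicsneo members) would likely turn the abort into a recoverable parse failure:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// All names and limits below are placeholders for illustration only.
constexpr std::size_t kAssumedHeaderSize = 24; // hypothetical packetizer header length
constexpr std::size_t kMaxPayloadSize = 9216;  // hypothetical sanity limit (jumbo-frame sized)

// Returns false instead of letting std::vector throw on a malformed frame.
bool loadBytestreamGuarded(const std::vector<std::uint8_t>& bytestream,
                           std::vector<std::uint8_t>& payloadOut) {
	if (bytestream.size() < kAssumedHeaderSize)
		return false; // too short to even contain a header
	const std::size_t payloadLength = bytestream.size() - kAssumedHeaderSize;
	if (payloadLength > kMaxPayloadSize)
		return false; // absurd length, likely a foreign or truncated frame
	payloadOut.assign(bytestream.begin() + kAssumedHeaderSize, bytestream.end());
	return true;
}
```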