We have come across a quirk with a Cisco SPA525G2 handset, our SIP trunk, and a Grandstream gateway.
We use the Grandstream to convert VoIP to PSTN for emergency failover (the client is a care home).
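For context, the outbound route selection normally happens in the FreeSWITCH dialplan rather than on the handset itself, which is what makes a single phone preferring the Grandstream so puzzling. A minimal sketch of the sort of outbound route we mean is below, assuming an XML dialplan; the gateway names sip_trunk and grandstream_fxo are placeholders, not our actual configuration:

<extension name="outbound_with_pstn_failover">
  <condition field="destination_number" expression="^(\d+)$">
    <action application="set" data="hangup_after_bridge=true"/>
    <!-- try the SIP trunk first; only if that leg fails, fall back to the Grandstream FXO gateway -->
    <action application="bridge" data="sofia/gateway/sip_trunk/$1|sofia/gateway/grandstream_fxo/$1"/>
  </condition>
</extension>

With a route shaped like that, calls should only touch the Grandstream when the trunk leg fails, so one handset consistently landing on it suggests something extension-specific rather than a trunk problem.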
The issue is that the site lost internet connectivity a couple of days ago, and once everything came back online everything seemed fine, apart from the fact that the Cisco device (the rest of the handsets are Yealinks) now routes its outbound calls over the Grandstream.
So far we have: factory reset the handset; set the phone up as a completely new extension; disabled the Grandstream route (this caused calls to fail on the Cisco device); reprovisioned the phone; rebooted it multiple times; flushed the cache in the SIP status; reloaded the ACL and XML; flushed registrations; and restarted the Proxmox server and the Grandstream once more.
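For reference, the XML/ACL/registration steps above were done from fs_cli, roughly with the commands below; the profile name internal and extension 300 are assumptions about our setup rather than anything special:

reloadxml
reloadacl
sofia profile internal flush_inbound_reg
sofia status profile internal reg
sofia global siptrace on
console loglevel debug

With siptrace and debug logging enabled, the INVITE from the Cisco and the resulting bridge string should show whether the dialplan is selecting the Grandstream outright or whether the trunk leg is being attempted and failing first.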
We feel a bit like we are banging our heads against the wall with this one, unless it is simply a handset fault, but it is strange for that to appear after a network loss. The FreeSWITCH log excerpt below (newest entries first) shows the end of a call from extension 300 and its outbound leg.
2020-01-30 14:22:34.448088 [DEBUG] freeswitch_lua.cpp:401 DBH handle 0x7f6adc075d10 released.
2020-01-30 14:22:34.448088 [DEBUG] freeswitch_lua.cpp:372 DBH handle 0x7f6adc075d10 Connected.
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_state_machine.c:751 (sofia/internal/300@1**.1**.**.**) State DESTROY going to sleep
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_state_machine.c:181 sofia/internal/300@1**.1**.**.** Standard DESTROY
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] mod_sofia.c:354 sofia/internal/300@1**.1**.**.** SOFIA DESTROY
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_state_machine.c:751 (sofia/internal/300@1**.1**.**.**) State DESTROY
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_state_machine.c:741 (sofia/internal/300@1**.1**.**.**) Running State Change CS_DESTROY (Cur 2 Tot 56)
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [NOTICE] switch_core_session.c:1735 Close Channel sofia/internal/300@1**.1**.**.** [CS_DESTROY]
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [NOTICE] switch_core_session.c:1731 Session 51 (sofia/internal/300@1**.1**.**.**) Ended
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_session.c:1713 Session 51 (sofia/internal/300@1**.1**.**.**) Locked, Waiting on external entities
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_state_machine.c:610 (sofia/internal/300@1**.1**.**.**) State Change CS_REPORTING -> CS_DESTROY
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_state_machine.c:938 (sofia/internal/300@1**.1**.**.**) State REPORTING going to sleep
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.468055 [DEBUG] switch_core_state_machine.c:174 sofia/internal/300@1**.1**.**.** Standard REPORTING, cause: NORMAL_CLEARING
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:938 (sofia/internal/300@1**.1**.**.**) State REPORTING
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:584 (sofia/internal/300@1**.1**.**.**) Running State Change CS_REPORTING (Cur 3 Tot 56)
6798caf7-49bb-4748-8c75-ac73942f6d99 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:751 (sofia/internal/01910191******70) State DESTROY going to sleep
6798caf7-49bb-4748-8c75-ac73942f6d99 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:181 sofia/internal/01910191******70 Standard DESTROY
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:619 (sofia/internal/300@1**.1**.**.**) State Change CS_HANGUP -> CS_REPORTING
6798caf7-49bb-4748-8c75-ac73942f6d99 2020-01-30 14:21:52.428288 [DEBUG] mod_sofia.c:354 sofia/internal/01912704370 SOFIA DESTROY
6798caf7-49bb-4748-8c75-ac73942f6d99 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:751 (sofia/internal/01910191****7070) State DESTROY
6798caf7-49bb-4748-8c75-ac73942f6d99 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:741 (sofia/internal/0191****70) Running State Change CS_DESTROY (Cur 3 Tot 56)
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:852 (sofia/internal/300@1**.1**.**.**) State HANGUP going to sleep
b0fb9eb9-b302-4982-91c4-263e6723b5fa 2020-01-30 14:21:52.428288 [DEBUG] switch_core_state_machine.c:60 sofia/internal/300@1**.1**.**.** Standard HANGUP, cause: NORMAL_CLEARING
6798caf7-49bb-4748-8c75-ac73942f6d99 2020-01-30 14:21:52.428288 [NOTICE] switch_core_session.c:1735 Close Channel sofia/internal/0191****70 [CS_DESTROY]
6798caf7-49bb-4748-8c75-ac73942f6d99 2020-01-30 14:21:52.428288 [NOTICE] switch_core_session.c:1731 Session 54 (sofia/internal/01910191******70) Ended