Save Call Detail state on second leg of a call




I'm probably overthinking this, but I'd like to know what folks think:


I have an array of identical Asterisk servers that are effectively
running a 'calling card' style application.  The first leg is inbound
and validates a bunch of things; if everything passes, the second leg
is outbound and 'billable'.

A custom AGI script in Perl makes DBI connections via pgpool to a
centrally located PostgreSQL database.

Works like a dream.  (And scales like a dream, since 1U HP/IBM servers
are so damned cheap and draw less power.)
(Maybe I'll convert the Perl to C once the features/flow have settled
down, but for now having it in Perl lets me enhance things very
quickly.)

My call accounting relies strictly on custom Perl AGI code that
creates a CDR in the PostgreSQL database at the end of the call, no
matter which leg hangs up first.  If the 2nd leg generates the hangup,
the Perl script just continues past the Dial exec and creates the CDR.
If the 1st leg generates the BYE, the $SIG{HUP} = \&catchangup;
handler catches it and calls the CDR routine.
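For anyone following along, the two hangup paths can be sketched like
this (a minimal illustration of the pattern, not my actual code; the
sub names and the guard variable are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $cdr_written = 0;

sub write_cdr {
    # In the real script this INSERTs the CDR row via DBI/pgpool;
    # here it just records that the routine fired exactly once.
    return if $cdr_written;     # guard so both paths can't double-write
    $cdr_written = 1;
}

# Path 1: the 1st (inbound) leg hangs up first -- Asterisk delivers
# SIGHUP to the AGI script, and the handler writes the CDR on its way out.
$SIG{HUP} = sub { write_cdr(); exit 0; };

# ... AGI setup, validation, Dial to the outbound leg would go here ...

# Path 2: the 2nd (outbound) leg hangs up first -- the script simply
# falls through past the Dial exec and writes the CDR inline.
write_cdr();
print "cdr_written=$cdr_written\n";
```

The guard matters because on some hangup orderings both paths can be
reached before the process exits.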

About 1 call in 5,000 misses the CDR creation (and I'm not sure why).
I know this because I create a record at the beginning of the call as
part of some fraud prevention/usage metrics, and that record hangs
around if the post-call cleanup doesn't fire correctly.

While I can get great details about the 1st leg of the call from the
plain old CSV CDRs, I really need more details about the second,
outbound leg of the call (especially whether it was answered).

I was thinking about using the filesystem to cache a 'backup' of
active calls.  Prior to connecting the outbound leg, create a file
with a unique name on the local (and idle) filesystem and put some
call details in it.  At the end of the outbound leg, update this file
with stats from the outbound leg PRIOR to attempting the database
updates.  If the database updates fire correctly, as they do 4,999
times out of 5,000, delete the file.  Then an occasional cronjob can
sweep through, find the leftover files, and execute the SQL necessary
to close out those calls.
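The file lifecycle I have in mind looks roughly like this (a sketch
only -- the spool path, file naming, and field names are invented for
illustration; the real version would write the call's uniqueid and
whatever stats the billing SQL needs):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $spool = tempdir(CLEANUP => 1);   # stand-in for a real spool directory

# 1. Before dialing the outbound leg: drop a uniquely named state file.
sub open_call_file {
    my ($uniqueid, %details) = @_;
    my $path = "$spool/$uniqueid.call";
    open my $fh, '>', $path or die "cannot create $path: $!";
    print $fh "$_=$details{$_}\n" for sort keys %details;
    close $fh;
    return $path;
}

# 2. After the outbound leg ends: append final stats BEFORE attempting
#    any database updates.
sub finalize_call_file {
    my ($path, %stats) = @_;
    open my $fh, '>>', $path or die "cannot append to $path: $!";
    print $fh "$_=$stats{$_}\n" for sort keys %stats;
    close $fh;
}

# 3. Only once the SQL CDR insert succeeds: remove the file.  Anything
#    left behind gets found by the cronjob sweep, which replays the SQL.
sub close_call_file {
    my ($path) = @_;
    unlink $path or die "cannot unlink $path: $!";
}

my $f = open_call_file('1234567890.42', callerid => '5551212');
finalize_call_file($f, disposition => 'ANSWERED', billsec => 37);
die "state file should persist until the DB commit" unless -e $f;
close_call_file($f);
print -e $f ? "leftover\n" : "cleaned\n";
```

The key ordering property is that the file always exists (with final
stats) during the window where the database insert can fail, so a
crash or network blip anywhere in that window leaves a complete record
for the sweep to pick up.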

Crazy?  It is pretty simple to do, and often the ONLY thing you can
trust on a Linux system is the creation/deletion/existence of a file
(assuming some transient network condition to the database, or other
Perl/exception handling, might prevent the SQL calls from firing).

Other ideas?  I also thought about having a local PostgreSQL instance
on each Asterisk server and turning on all the CDR options to see what
I could fish from that.  But it is hard to beat simple flat files for
redundant logging.

--

asterisk-users mailing list

