http and -async

Don Libes said:

Quote:
> Has anyone modified the http package to support the -async flag to
> socket?  (I can't afford to temporarily halt the entire app while
> waiting for "socket" to complete.)
> Literally adding "-async" is not sufficient.  (I tried this of
> course.)  For example, http::Event is not prepared to fail when
> reading the http headers.  This looks easy to fix, but I'm worried
> about the code that writes the headers (in http::geturl) as well since
> there is no check for fblocked.
> I suspect I can modify the package myself but it looks tricky enough
> that I figured I'd ask first to see if someone has already done it.

To use async most easily, change this:

    if {[info exists phost] && [string length $phost]} {
        set srvurl $url
        set s [socket $phost $pport]
    } else {
        set s [socket $host $port]
    }
    set state(sock) $s

To

    if {[info exists phost] && [string length $phost]} {
        set srvurl $url
        set s [socket -async $phost $pport]
    } else {
        set s [socket -async $host $port]
    }
    set state(sock) $s

    fileevent $s writable [list ::http::ConnectDone $token]
    vwait $token\(connectDone)
    fileevent $s writable {}

This keeps the event loop active, but doesn't return control to the
caller of geturl.  To do that you need to split that procedure in
half.
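A minimal sketch of what the second half might look like (the proc name and its body are hypothetical; the real work is whatever remains of geturl after the socket call):

```tcl
# Hypothetical second half of geturl: runs once the async connect
# completes, writes the request headers, then installs the normal
# readable handler.  All names here are illustrative, not from the
# actual http package.
proc http::Connected {token} {
    variable $token
    upvar 0 $token state
    set s $state(sock)
    fileevent $s writable {}
    # ... the rest of geturl goes here: send the request line and
    # headers, flush, then wait for the reply ...
    fileevent $s readable [list http::Event $token]
}
```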

proc http::ConnectDone {token} {
    variable $token
    upvar 0 $token state
    set state(connectDone) 1
}

Quote:
> As an aside, the documentation should be fixed.  Socket(n) says that
> fblocked should be called after each flush but fblocked(n) says that
> it only applies to input operations.  It sure would be helpful to know
> more about how puts, flush, and fblocked are supposed to be used on
> nonblocking sockets.

My understanding is that fblocked is used like "eof"
to test if a socket is blocked, but I find I rarely use it.
Instead, I just use fileevent to get callbacks.

If you are using gets in a read handler on a non-blocking socket,
it is possible that gets returns -1 because only a partial line
has been read.  You can use fblocked after gets to differentiate
this -1 return from the eof case.
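For instance, a read handler along these lines (proc and variable names are mine, not from the http package) distinguishes the two -1 cases:

```tcl
# Hypothetical read handler for a non-blocking socket.
proc Reader {sock} {
    if {[gets $sock line] < 0} {
        if {[fblocked $sock]} {
            # Only a partial line is buffered; wait for more data.
            return
        }
        # Genuine end of file: tear down the channel.
        fileevent $sock readable {}
        close $sock
        return
    }
    # A complete line was read.
    puts "got: $line"
}

# Typical setup:
#   fconfigure $sock -blocking 0
#   fileevent $sock readable [list Reader $sock]
```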




Sun, 26 Mar 2000 03:00:00 GMT  
 http and -async


   Don Libes said:
   > Has anyone modified the http package to support the -async flag to
   > socket?  (I can't afford to temporarily halt the entire app while
   > waiting for "socket" to complete.)

   > Literally adding "-async" is not sufficient.  (I tried this of
   > course.)  For example, http::Event is not prepared to fail when
   > reading the http headers.  This looks easy to fix, but I'm worried
   > about the code that writes the headers (in http::geturl) as well since
   > there is no check for fblocked.

   > I suspect I can modify the package myself but it looks tricky enough
   > that I figured I'd ask first to see if someone has already done it.

   To use async most easily, change this:

       if {[info exists phost] && [string length $phost]} {
           set srvurl $url
           set s [socket $phost $pport]
       } else {
           set s [socket $host $port]
       }
       set state(sock) $s

   To

       if {[info exists phost] && [string length $phost]} {
           set srvurl $url
           set s [socket -async $phost $pport]
       } else {
           set s [socket -async $host $port]
       }
       set state(sock) $s

       fileevent $s writable [list ::http::ConnectDone $token]
       vwait $token\(connectDone)
       fileevent $s writable {}

Is the "fileevent writable" necessary?  The way I interpret the man
pages, I can write to the socket even before it's connected, call
flush, and the I/O subsystem will take care of flushing it (later)
even if it isn't connected yet.  The man pages don't explicitly say
this, but they seem to imply it (and it works that way in my testing,
although I've only tested one platform so far).  So since geturl
already calls flush, it seems to me there's no need to wait for a
writable event.
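That reading can be exercised with something like this (host and request are hypothetical, and as noted above the behavior may well be platform-dependent):

```tcl
# Write before the async connect has finished; with the channel in
# non-blocking mode the output is queued by the channel layer and
# sent once the connection completes.
set s [socket -async example.com 80]
fconfigure $s -blocking 0
puts $s "GET / HTTP/1.0\r\n"
flush $s   ;# queues the data; no wait for a writable event
```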

To finish the split (so geturl can return immediately), I modified
http::Event, essentially moving its whole body into the catch that
was already there.  This catches any socket connection failures (at
least it has, so far...) that would occur while reading the headers.
Plus, it now catches any other errors too.

One other change I'd like to see in http is removal of the catch
around "eval $state(-command)".  As it stands now, this lumps user
and system errors together, which I think is a bad thing.  Here's
why:

Users can already catch their own errors (generated from eval
$state(-command)) by putting explicit catches in their "command" code.
If a "cannot happen" type error occurs (such as a syntax error during
development) this will just be caught by bgerror anyway.
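In other words, a user who cares can already wrap the callback body themselves, something like this (the callback and its helper are hypothetical):

```tcl
# Hypothetical user callback passed via -command: the user's own
# catch handles errors in user code, so the package need not.
proc MyCommand {token} {
    if {[catch {
        # User code that may fail.
        ProcessPage [http::data $token]
    } err]} {
        puts stderr "callback error: $err"
    }
}

# http::geturl $url -command MyCommand
```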

In contrast, catching things like connection failures inside of Event
(and stuffing them into state(error)) makes sense.  There's no other
way for the user to handle this type of error.

As an example, we wanted to retry any geturls that failed.  Even when
the URL is valid, there are still many transient reasons why the http
background procs can fail.  Catching these transient errors is easy
until every possible user error gets lumped in as well; then it's a
mess and there's no easy way to decide which is which.  Our solution
was to remove the catch, as I said above.
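With that change in place, a retry wrapper might look roughly like this (all names are hypothetical; it assumes transient failures end up in state(error) while user errors propagate to bgerror):

```tcl
# Hypothetical retry wrapper around http::geturl.
proc Fetch {url {tries 3}} {
    http::geturl $url -command [list FetchDone $url $tries]
}

proc FetchDone {url tries token} {
    upvar #0 $token state
    if {[info exists state(error)] && $tries > 1} {
        # Transient failure recorded by the package: retry.
        http::cleanup $token
        Fetch $url [expr {$tries - 1}]
        return
    }
    # Success, or retries exhausted: consume the result here.
    http::cleanup $token
}
```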

Don



Mon, 27 Mar 2000 03:00:00 GMT  