Calvin Morrison <mutantturkey_AT_gmail.com> writes:
> On 22 February 2012 09:17, Troels Henriksen <athas_AT_sigkill.dk> wrote:
>> Calvin Morrison <mutantturkey_AT_gmail.com> writes:
>>
>>> On 22 February 2012 08:36, Troels Henriksen <athas_AT_sigkill.dk> wrote:
>>>> Calvin Morrison <mutantturkey_AT_gmail.com> writes:
>>>>
>>>>> But, since we write out to the cookie jar frequently, wouldn't it be
>>>>> inefficient to be constantly re-reading (and reparsing) the entire
>>>>> cookie file?
>>>>
>>>> Yes, but we're already doing that, so apparently it's not a big problem
>>>> in practice.
>>>
>>> I was aware that we were constantly writing to the cookie jar. I was
>>> not aware that we were manually rereading it each time; now I see that
>>> the getcookies function is doing it. That also seems rather
>>> inefficient, because we reload the cookie jar on every request signal
>>> emitted. That means it happens not only every time we reload a page,
>>> but whenever the page requests new data (e.g. Facebook while
>>> scrolling through the feed).
>>>
>>> What is a better solution, or is that the best solution?
>>
>> A better solution would be to cache the cookie file contents and only
>> re-read if something has changed. The "best" solution, with respect to
>> efficiency, would be to have some central cookie daemon that can send
>> the changed cookies to the running Surf instances. Uzbl uses this last
>> approach, but I think it's far too complicated.
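
(A very rough sketch of the caching approach: stat(2) the jar and only
re-read when its mtime changes. The cookiefile argument, loadcookies()
and the getcookies() body below are made up for illustration, not
surf's actual code:)

#include <sys/stat.h>
#include <time.h>

static time_t cookiemtime; /* mtime of the jar at the last read */

static void
getcookies(const char *cookiefile)
{
        struct stat st;

        if (stat(cookiefile, &st) < 0)
                return;                  /* no jar yet, nothing to load */
        if (st.st_mtime == cookiemtime)
                return;                  /* unchanged since the last read */
        cookiemtime = st.st_mtime;
        loadcookies(cookiefile);         /* the existing parse/re-read code */
}

(One caveat: mtime often has one-second resolution, so this can miss a
write that lands in the same second as the previous read.)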
>
> What about inotify? I am now thinking that would be the best approach.
> Instead of loading the file on every request, we could just check for
> inotify events. We can add a watch on the cookie file for the IN_MODIFY
> event; if an event has occurred, then we reread the file.
Yes, inotify would be the mechanism by which file changes are
discovered, although it's not portable. I'm not certain what the
Suckless Zeitgeist is on using Linux-only facilities.
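
Very roughly, the inotify variant could look like this (Linux-only;
initcookiewatch() and cookiejarchanged() are invented names, not
anything that exists in surf):

#include <sys/inotify.h>
#include <unistd.h>

static int ifd = -1;

static void
initcookiewatch(const char *cookiefile)
{
        ifd = inotify_init1(IN_NONBLOCK);
        if (ifd < 0)
                return;
        if (inotify_add_watch(ifd, cookiefile, IN_MODIFY) < 0) {
                close(ifd);
                ifd = -1;
        }
}

static int
cookiejarchanged(void)
{
        char buf[4096];
        int changed = 0;

        if (ifd < 0)
                return 1;       /* no watch, fall back to always re-reading */
        while (read(ifd, buf, sizeof buf) > 0)
                changed = 1;    /* drain queued events; any event means changed */
        return changed;
}

getcookies() would then only re-read and re-parse the file when
cookiejarchanged() returns nonzero (and on the very first call). One
catch: inotify watches the inode, so if another instance rewrites the
jar by writing a temporary file and rename(2)-ing it over, IN_MODIFY on
the old inode never fires and the watch has to be re-added.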
--
\ Troels
/\ Henriksen