Responses below for Adam, just as an add-on to Kevin's.
After writing this I noticed other posts were made to the thread,
sorry if the comments are redundant.
>> I never ever touch the peqs file with a
>> BASIC program directly because I don't think it likes it too much.
>Not really sure what you mean by this. In what way does the PEQS file
>not like being "touched" by a BASIC program?
I play with PEQS all the time; it never seems fragile to me.
>> For example, It would be nice if SP-EDIT had a (for use with the 'f'
>> to file parameter) a way to choose a specific page or page range.
>AFAIK, It doesn't.
No matter how you slice it, you're going to have to generate data and
then parse through it for delimiters. Either the system does it or
you do it with BASIC, so even if SP-EDIT had such an option, the same
work would still be done.
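To make the point concrete, here's a sketch (in Python, since the concept is general and not specific to any Pick-side tool) of the parsing work involved: the captured output is split on form-feed page delimiters and the wanted range is kept. The function name and sample text are my own for illustration.

```python
# Hypothetical sketch: extract a page range from captured report output
# by splitting on form-feed (^L) delimiters -- the same parsing work
# SP-EDIT would otherwise be doing on your behalf.

def pages(report_text, first, last):
    """Return pages first..last (1-based) from form-feed-delimited output."""
    all_pages = report_text.split("\f")
    return all_pages[first - 1:last]

# e.g. a three-page report captured from a hold file:
report = "page one\fpage two\fpage three"
wanted = pages(report, 2, 3)
```

Whether this runs in the database or in the front end, someone is walking the same delimiters.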
This "let someone else do it" concept is the NIMBY principle, short
for "Not In My Back Yard". It's usually applied to siting prisons or
power plants near residential areas (anyone play SimCity?), but it
applies a lot to code too: just because we delegate a chore to some
other process doesn't mean the result will be any more performant or
attractive. But I digress, and then some.
Hmm, now that I think about this, if you really want to get creative,
you can use a shared printer rather than a hold file, send the output
through a filter which parses the data and leaves it in the native
file system. At least the parsing happens on the fly, as the output
is spooled, rather than at request time. You
can then grab the pages you need with no parsing at all.
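The filter half of that pipeline could look something like this Python sketch (the file names and layout are assumptions; the real filter would hang off whatever your shared-printer mechanism feeds it):

```python
# Hypothetical filter sketch: take the spooled report text, split it on
# form feeds, and leave each page as its own file in the native file
# system so the front end can grab pages with no parsing at all.
import os
import tempfile

def spool_filter(report_text, out_dir):
    """Write one file per page; return the list of paths written."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for n, page in enumerate(report_text.split("\f"), start=1):
        path = os.path.join(out_dir, f"page{n:04d}.txt")
        with open(path, "w") as f:
            f.write(page)
        paths.append(path)
    return paths

# demo run into a scratch directory
demo_dir = tempfile.mkdtemp()
written = spool_filter("first page\fsecond page", demo_dir)
```

A request for page 2 then becomes a plain file read of `page0002.txt`.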
>> Now I have to first make a call to d3, file all the pages to disk
>> ...but now when a user kills his browser
>> and forgets about this task the hard disk space is still being stolen
>> by his task.
>Welcome to stateless programming.
Agreed. The real question is, why are you generating (potentially)
1000-2000 pages of data for a user who may not stick around for the
results? A better solution is first to use indexing and generate only
the data that the user really wants. Second, give the user some form
of entertainment while the report is being generated so that they
don't go away. For a web interface you can do this with polling
frames (using the meta refresh tag), Shockwave/Flash, Java, or some
other interface more intelligent than plain HTML.
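The meta-refresh polling idea is simple enough to sketch. Here's a hypothetical Python helper that builds the status page: while the report is running it reloads itself every few seconds; once the report is done it redirects to it. The URL and interval are placeholders.

```python
# Hypothetical sketch of a polling status page built with the HTML
# meta refresh tag: reload while working, redirect when done.

def status_page(done, report_url="report.html", interval=5):
    if done:
        # refresh immediately to the finished report
        refresh = f'<meta http-equiv="refresh" content="0;url={report_url}">'
        body = "Report ready, redirecting..."
    else:
        # reload this same page every `interval` seconds
        refresh = f'<meta http-equiv="refresh" content="{interval}">'
        body = "Still generating your report, please wait..."
    return f"<html><head>{refresh}</head><body>{body}</body></html>"
```

Even plain HTML can keep the user entertained this way, with no client-side code at all.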
Another possible solution (depending on how diverse your needs are) is
to generate the x-thousand page document once and then somehow index
the report itself so that user queries will allow you to point
directly into the report, rather than generating a new report for
every request. This creates a near-line state engine (neither
on-line nor off-line) which you may want to offload to another
query-only system. I don't like this particular idea but there are
many ways to skin these cats.
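One way to index the report itself (a sketch of the idea, not the way to do it): scan the big document once, recording the byte offset where each form-feed-delimited page starts, then seek straight to whatever page a query points at. Everything here is illustrative Python; the same trick works in any language that can seek in a file.

```python
# Hypothetical sketch: index an x-thousand-page report by byte offset
# so user queries can jump directly into it instead of regenerating.
import tempfile

def build_index(path):
    """Map 1-based page number -> byte offset of that page's start."""
    index, offset, page = {1: 0}, 0, 1
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            for i, b in enumerate(chunk):
                if b == 0x0C:            # form feed starts a new page
                    page += 1
                    index[page] = offset + i + 1
            offset += len(chunk)
    return index

def read_page(path, index, n):
    """Seek to page n and read just that page."""
    with open(path, "rb") as f:
        f.seek(index[n])
        end = index.get(n + 1)
        data = f.read() if end is None else f.read(end - index[n])
    return data.rstrip(b"\f").decode()

# demo: a tiny three-page "report"
path = tempfile.NamedTemporaryFile(delete=False).name
with open(path, "wb") as f:
    f.write(b"one\ftwo\fthree")
idx = build_index(path)
```

The index is tiny compared to the report, and the query-only system never re-runs the generation.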
If you have impatient users, you'll absolutely need to generate your
reports asynchronously anyway, lest someone hang all your processes
with dead requests.
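The asynchronous shape, sketched in Python (names are mine; substitute a phantom process or whatever your environment offers): hand the job to a background worker so the request returns immediately, and an abandoned browser costs you a queue entry rather than a hung process.

```python
# Hypothetical sketch of asynchronous report generation: a background
# worker drains a job queue, so request handlers never block on a
# slow report and dead requests can't hang them.
import threading
import queue

jobs = queue.Queue()
results = {}

def worker():
    while True:
        job_id, build = jobs.get()
        results[job_id] = build()      # run the (slow) report builder
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(job_id, build):
    """Queue a report job; returns at once, poll `results` later."""
    jobs.put((job_id, build))

submit("rpt1", lambda: "2000 pages of data")
jobs.join()                            # demo only: wait for completion
```

In production you'd poll `results` from the status page rather than joining the queue.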
>> Using pick as a back end is a pain-in-da-bu**. Flame Away.
>Like any database, it has its strengths and weaknesses.
I don't see this as a weakness; bad planning for user requirements
and poor design are going to be a pain no matter what tools you use.
Adam, you said yourself:
> It'll save me some logic in the front end, cleaning up after
> users, keeping track of things and stuff.
If you're using Pick as a back end, then technically it's not
responsible for the user interface. Make up your mind (no offense
intended), but the functions have to be written "somewhere".