Re: Database size problem
On Fri, 23 Jun 2000, Cèsar Ordiñana wrote:
> On Fri, 23 Jun 2000, at 03:06:37 PM +0200, Falko Braeutigam wrote:
> > On Fri, 23 Jun 2000, Cèsar Ordiñana wrote:
> > > Hello everybody,
> > >
> > > I've written a little program to test ozone, by creating, reading and
> > > deleting an increasing number of objects of increasing size. The basic
> > > algorithm is:
> >
> > First of all, I'm afraid your create/read/destroy methods are not methods of
> > a database object, and so they run outside the server, right? This results in
> > very poor performance _and_ causes transaction problems. Although all ozone
> > samples promote another design for ozone applications, I see this again and
> > again. Well, it's probably my fault because ozone still lacks documentation,
> > but on the other hand I'm not sure if written documentation is better than
> > sample code.
>
> Well, I've re-read all the documentation and the web, and I think I
> understand you now. It's the problem of changing from relational to OO
> databases.
>
> Because of this, I think my test is useless now. An ozone database will
> contain relatively complex, large objects with some kind of relation between
> them (the data from an application), not a big number of small, unrelated
> ones. At least it helped me understand things.
Your test is not completely useless, but it is not a good example of a
real-world use case ;)
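For reference, the server-side design the ozone samples promote might look roughly like this. This is a hedged sketch: the `Counter`/`CounterImpl` names are invented for illustration, and only the `OzoneRemote`/`OzoneObject` base types and the `/*update*/` marker come from the conventions discussed in this thread.

```java
import org.ozoneDB.OzoneObject;
import org.ozoneDB.OzoneRemote;

// Remote interface: clients only ever talk to this proxy type.
// The /*update*/ marker tells ozone's proxy generator that the
// method changes object state and therefore needs a WRITE lock.
public interface Counter extends OzoneRemote {
    void /*update*/ increment();
    int value();
}

// Implementation: lives and runs inside the ozone server, so the
// create/read/update work happens server-side instead of in the
// client, avoiding the performance and transaction problems above.
public class CounterImpl extends OzoneObject implements Counter {
    private int value;

    public void increment() {
        value++;
    }

    public int value() {
        return value;
    }
}
```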
>
> > > It works as expected if I put the open/close connection into the first loop,
> > > like this:
> > >
> > > for size = 10,100,1000,10000 do
> > >
> > > open connection to local ozone database (using LocalDatabase)
> > >
> > > for objects_number = 10,100,1000 do
> > > create(objects_number, size)
> > > read(objects_number)
> > > destroy(objects_number)
> > > end for
> > >
> > > close connection
> > >
> > > end for
> > >
> > >
> > > But I think it isn't the correct solution.
> > Unfortunately you didn't send the complete code, so I have to write my own
> > to check this ;) Anyway, I will do so. Maybe it's a good candidate for a new
> > test or sample.
>
> The problem I was seeing here is that when you delete an object (a lot of
> them in this case) from the database, it isn't really deleted from the
> database files until the connection is closed. Is that correct?
No, that isn't correct behaviour. But I don't see how closing a connection
could influence the cluster files. The one and only operation that actually
changes the database files is a transaction commit/abort. This is done by the
client or, at the latest, when the connection is closed... (!) Did you
commit/abort all transactions properly before closing the connection?
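As a hedged sketch of what "commit before close" might look like: `LocalDatabase` comes from the test description earlier in this thread, but the transaction method names below (`newTransaction()`, `begin()`, `commit()`) and the `open()` arguments are assumptions, so check them against the actual ozone API.

```java
import org.ozoneDB.ExternalTransaction;
import org.ozoneDB.LocalDatabase;

public class CommitBeforeClose {
    public static void main(String[] args) throws Exception {
        LocalDatabase db = new LocalDatabase();
        db.open("/path/to/db", 0);   // hypothetical open arguments
        try {
            ExternalTransaction tx = db.newTransaction();
            tx.begin();
            // ... createObject()/deleteObject() and proxy calls go here ...
            tx.commit();   // this is what actually changes the cluster files
        } finally {
            db.close();    // close only after every transaction is
                           // committed or aborted
        }
    }
}
```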
> > > If transactions involve a change in the objects' state, driven by the
> > > methods marked in the interface (with /*update*/), how can I manage
> > > transactions involving the creation and destruction of objects?
> > The only way to create and destroy database objects is through the API
> > methods createObject() and deleteObject(). These methods implicitly have
> > WRITE lock level. Does this answer your question?
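A minimal sketch of those two API calls, assuming the usual ozone proxy pattern. This is a fragment, not a complete program: `db` stands for an already-open connection (e.g. the LocalDatabase mentioned earlier), and the `Counter`/`CounterImpl` names are hypothetical; only `createObject()`/`deleteObject()` come from the answer above.

```java
// db is an open ozone database connection. Both createObject() and
// deleteObject() implicitly take a WRITE lock on the object.
Counter c = (Counter) db.createObject(CounterImpl.class.getName());
c.increment();        // proxy call: runs inside the server
db.deleteObject(c);   // the only way to destroy a database object
```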
>
> Yeah! I've seen the light ;)
Great!
Falko
--
______________________________________________________________________
Falko Braeutigam mailto:falko@softwarebuero.de
softwarebuero m&b (SMB) http://www.softwarebuero.de