Let's say you have table t(i int not null, a char(1))
insert into t select 2,'a'
thread A issues this:
begin transaction
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
select * from t where i=2
and then just sits there. Along comes thread B and issues
select * from t where i=2
update t set a='c' where i=2
The first of B's statements will succeed, but the second one will wait until thread A has issued a commit. There's a problem, however: B can easily overwrite the change made by A if it acts based on what it saw when it retrieved the row. For that reason, it's better to issue
update t set a='c' where i=2 and a='a'
This effectively verifies that B is updating what it thinks it is updating. And this is exactly how a client-side cursor works, which is the most common type of cursor in today's development tools, especially web-based ones: no transaction, no lock, but at update time every field that was read is verified, and if any of them has changed, an error message is returned to the effect that another user has modified the result set.
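In T-SQL that verification can be sketched like this (checking @@ROWCOUNT after the guarded update; the error text is just an illustration, not what any particular tool emits):

update t set a='c' where i=2 and a='a'
if @@ROWCOUNT = 0
    raiserror('Another user has modified the row; re-read it and try again.', 16, 1)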
If you want to eliminate this, then you have to use the even more restrictive SERIALIZABLE transaction isolation level in thread B, and, as soon as you read the rows in A, update them there using some dummy column kept just for this purpose. Then B won't be able to see the rows that may be modified by A - which is correct, because otherwise B wouldn't know that those rows are currently being worked on by A. But this of course brings in the issue of A crashing, leaving for lunch, and so on.
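A rough sketch of that scheme, assuming t is extended with an extra column just for this (I'm calling it dummy; the name and the exact arrangement are illustrative):

alter table t add dummy int

-- thread A: read the rows and immediately touch them, taking exclusive locks
begin transaction
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
select * from t where i=2
update t set dummy=dummy where i=2

-- thread B: cannot see the rows until A commits or rolls back
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
begin transaction
select * from t where i=2   -- blocks here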
For that reason, a client-side cursor is probably the most practical way to do all this, which is why it's the most widespread style of database access today.