Anyone with an A1000, Sol 2.6, and > 8 luns???

Post by James Ler » Sun, 11 Jun 2000 04:00:00



Before we start, let me say this: I am employed as a hardware repair guy.  Part of
my job is being able to distinguish hardware problems from software problems.  After
determining that a problem is software, it's nice to be able to give the software support
group a good idea of where the problem may lie.  Thus, I really need to know the ins
and outs of the software side of life to give the best possible service.

That being said, here is the situation I found myself in.

Cust has E4500 attached to A1000

A1000 had 8 disks in it configured as 8 luns (no idea why they did that)

Cust added an additional 4 disks to the A1000.

Cust tried to create additional luns (needs patch to make it work)

Cust installed patch, modified config files, ran some script, rebooted.

Cust is now unable to manage A1000 from RM6 (no luns show)

Cust is still able to mount the previous 8 luns and access data w/o any
problems. Cust just can't manage the array anymore.

At this point I am certain that there are no hardware failures, and I send the
ticket back to the storage group for resolution.  However, I am unable to give
any advice to the cust or the storage group as to where the software problem may
reside.  This is something I am normally really good at.

So I ask, are there any Gotchas out there to be aware of when trying to get more
than 8 luns on an A1000/A3x00 that is already configured with 8 luns?

I did a deja search and found a few posts relating to this topic, and one answer
was to completely reset the array: power down the array, remove all drives
but one, remove the battery backup, wait a few minutes, reinstall the battery backup,
power up the array, fire up RM6, start reinstalling drives, then reconfigure the LUNs and
restore the data.  Obviously this sounds a little extreme.

Thanks for any help
James

 
 
 

Anyone with an A1000, Sol 2.6, and > 8 luns???

Post by fishi.. » Mon, 12 Jun 2000 04:00:00




Quote:> I did a deja search and found a few posts relating to this topic, and one answer
> was to completely reset the array: power down the array, remove all drives
> but one, remove the battery backup, wait a few minutes, reinstall the battery backup,
> power up the array, fire up RM6, start reinstalling drives, then reconfigure the LUNs and
> restore the data.  Obviously this sounds a little extreme.

> Thanks for any help
> James

James,

Chances are you found my post regarding resetting an A1000/A3X00 when
it gets hosed up in RM6 as you described.  I had the exact same thing
happen. Personally, I have never successfully created
more than 8 LUNs on an A1000, but I can tell you I've tried and got the
same results as your customer.  I've also done the same kind of thing
to an A3500 (OK, so I missed the fine print that says you can't have
more than 8 LUNs on a PCI system...). The array becomes completely
unresponsive to RM6. If you try running a healthcheck from the command
line, it comes back and tells you there is some kind of error with the
array.  I talked to Sun about this, and the only answer they could come
up with is to reset the controllers. Like you, I found the procedure
very extreme. Sun could give me no other solution.  Luckily for me
there was no data on the arrays yet.  If you come up with a solution
that works, I would really like to hear it.
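
For reference, the command-line healthcheck I mean is roughly the
following, assuming the default RM6 install path on Solaris 2.6 (the
exact flag may differ on your release):

# run a health check against the configured RAID modules; when the array
# is in the wedged state described above, this is where the error shows up
/usr/lib/osa/bin/healthck -a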

--
John

Actual Sun Bug:
4110503 as_setprot heuristic gave my process a wedgie


 
 
 

Anyone with an A1000, Sol 2.6, and > 8 luns???

Post by unix.. » Tue, 13 Jun 2000 04:00:00


Hi,
  I've installed and configured a few A1000 arrays, and they've all been
configured with > 8 LUNs.  You mentioned a patch that the customer
installed, but did they also run the script /usr/lib/osa/bin/genscsiconf
to add the additional sd.conf entries, and /usr/lib/osa/bin/hot_add to
allow the Raid Manager to see the new LUNs?
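
In case it helps, the sequence I have used is roughly the following; the
target and LUN numbers below are only placeholders, since they depend on
how the array is attached:

# regenerate the sd.conf entries for the additional LUNs
/usr/lib/osa/bin/genscsiconf

# afterwards /kernel/drv/sd.conf should contain lines along these lines
# for the new LUNs (placeholder target/lun numbers shown):
#   name="sd" class="scsi" target=4 lun=8;
#   name="sd" class="scsi" target=4 lun=9;

# then let the running system and the Raid Manager pick up the new devices
/usr/lib/osa/bin/hot_add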

  This also may or may not help, but in instances where I could see
the LUNs through command line (/etc/raid/bin/lad), but not through the
Raid Manager GUI, I've had to check to make sure there wasn't a lock in
the /etc/raid/locks directory for the array I couldn't access, or for a
particular LUN I couldn't see.  Usually, removing the lock corrected
the problem.  I don't remember the exact naming for the lock file, but I
believe it may be called lunlocks, or something like that.
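
The quick way to check is something like this, using the stock RM6 paths
(the lock file name below is only an example, per the caveat above):

# confirm the LUNs are still visible from the command line
/etc/raid/bin/lad

# look for leftover lock files for the array or for individual LUNs
ls -l /etc/raid/locks

# if a stale lock (e.g. a lunlocks file) is present, remove it and retry RM6
rm /etc/raid/locks/lunlocks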

Thanks,
Carol


 
 
 

Anyone with an A1000, Sol 2.6, and > 8 luns???

Post by James Ler » Wed, 14 Jun 2000 04:00:00


Upon review by the software guy on site, it turned out the lock files were
causing the issue.

Thank you, again I learn something new ;)

James


>Hi,
>  I've installed and configured a few A1000 arrays, and they've all been
>configured with > 8 LUNs.  You mentioned a patch that the customer
>installed, but did they also run the script /usr/lib/osa/bin/genscsiconf
>to add the additional sd.conf entries, and /usr/lib/osa/bin/hot_add to
>allow the Raid Manager to see the new LUNs?

>  This also may or may not help, but in instances where I could see
>the LUNs through command line (/etc/raid/bin/lad), but not through the
>Raid Manager GUI, I've had to check to make sure there wasn't a lock in
>the /etc/raid/locks directory for the array I couldn't access, or for a
>particular LUN I couldn't see.  Usually, removing the lock corrected
>the problem.  I don't remember the exact naming for the lock file, but I
>believe it may be called lunlocks, or something like that.

>Thanks,
>Carol


 
 
 

1. Has anyone compiled mpack on Sol 2.6 Sparc

I need to send files as attachments via cron, and I was told that mpack
was the way to go.  I downloaded the source for 1.5, but it just won't
compile; it looks like there's a problem with getopt.c.  Here's the output
of make:

# make
gcc -O   -c  unixpk.c
gcc -O   -c  encode.c
gcc -O   -c  codes.c
gcc -O   -c  magic.c
gcc -O   -c  unixos.c
gcc -O   -c  string.c
gcc -O   -c  xmalloc.c
gcc -O   -c  md5c.c
gcc -O   -c  getopt.c
getopt.c: In function `getopt':
getopt.c:67: argument `av' doesn't match prototype
/usr/include/stdio.h:329: prototype declaration
getopt.c:67: argument `opts' doesn't match prototype
/usr/include/stdio.h:329: prototype declaration
*** Error code 1
make: Fatal error: Command failed for target `getopt.o'

I know that people are using it on 2.6, and I'm probably missing
something simple, but I don't see it.  Has anyone had luck with it?  (One
possible workaround is sketched after the list of related threads below.)

a.r.

2. Solaris/SVR4 port of pty

3. qfull-retries/qfull-retry-interval in Sol 2.5.1 and Sol 2.6

4. Partitioning problem

5. Upgrade Sol 2.5.1 to Sol 2.6

6. realproducer commandline question

7. Sol 2.5.1 and Sol 2.6 compatibility

8. Ipchains Problems

9. sol 2.6 writing >2g files over NFS

10. From Sol 2.5.1 -> 2.6 w/ vxva 2.5

11. Upgrading Solaris 2.6 --- -> Sol 7 (with metadevices)

12. 2.6 FCS -> 2.6 5/98 upgrade fails because /usr moved to /usr:2.6

13. A1000 - Largest single LUN and Most Space Availabilty
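
Regarding the mpack build failure in item 1 above: the errors come from a
clash between mpack's old-style getopt() definition and the ANSI prototype
that Solaris 2.6 declares in <stdio.h>.  A sketch of one way to confirm and
work around it, not verified against mpack 1.5 specifically:

# see exactly how Solaris declares getopt() -- this is the prototype the
# compiler is comparing against (stdio.h line 329 in the error output)
grep -n getopt /usr/include/stdio.h

# the declaration is essentially:
#   int getopt(int argc, char * const *argv, const char *optstring);
# so either edit getopt.c so its definition uses those exact argument
# types, or remove getopt.o from the build and rely on the getopt(3C)
# that libc already provides, then rebuild
make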