Wednesday, January 23, 2019

Script to set up passwordless SSH between hosts.

The following Python script will set up passwordless SSH between hosts on Solaris 11.

This assumes the Python packages it needs are already installed on Solaris 11 (the script checks for paramiko and installs it if it is missing).
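
To check up front, you can run the same pkg commands the script uses (package names as used in the script; they may vary by Solaris 11 release):

# pkg list library/python/paramiko
# pkg install paramiko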

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to run the script:

# ./setupPasswordlessSSH.py --help
usage: setupPasswordlessSSH.py [-h] [-u USER] [-p PASSWORD] -c CLIENT
                               [CLIENT ...]

optional arguments:
  -h, --help            show this help message and exit
  -u USER, --user USER  Username[Optional] (Default : root)
  -p PASSWORD, --password PASSWORD
                        Password[Optional] (Default : welcome1)
  -c CLIENT [CLIENT ...], --client CLIENT [CLIENT ...]
                        Host or List of Hosts separated by space.
                     
Ex:  ./setupPasswordlessSSH.py -u [<USER>] -p [<your password>] -c Host1 Host2 .. HostN

Note: If you don't specify a user, it defaults to 'root'. You can also edit the script and replace the default 'PASSWORD' in the argument parser with your actual password; then there is no need to use the -p option.

setupPasswordlessSSH.py [Copy between the lines]
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#!/usr/bin/python
import os
import argparse

##Check and install the Python library packages required for this script to run
def inst_pkgs():
    retcde = os.system('pkg list -q library/python/paramiko')
    if retcde != 0:
        print 'Packages "paramiko,paramiko-27" are not found'
        print "They are required for this script to work. Installing them now....\n"
        os.system('pkg install paramiko')

inst_pkgs()
import paramiko

##Deploy our public key to the remote host's authorized_keys
def deploy_key(key, server, username, password):
    # First check whether password-less SSH already works for this host
    retcode = os.system('ssh -l %s -oNumberOfPasswordPrompts=0 %s "echo hello" >/dev/null 2>&1' % (username, server))
    if retcode == 0:
        print "Password-less SSH is already enabled for:", server
        return
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
    client.connect(server, username=username, password=password)
    client.exec_command('mkdir -p ~/.ssh/')
    client.exec_command('echo "%s" >> ~/.ssh/authorized_keys' % key)
    client.exec_command('chmod 644 ~/.ssh/authorized_keys')
    client.exec_command('chmod 700 ~/.ssh/')
    client.close()
    print "Password-less SSH has been setup with:", server

##Help options
parser = argparse.ArgumentParser()
parser.add_argument('-u', '--user', default='root', help="Username[Optional] (Default : root)")
parser.add_argument('-p', '--password', default='PASSWORD', help="Password[Optional] (Default : PASSWORD)")
parser.add_argument('-c', '--client', nargs='+', type=str, default=[], required=True, help="Host or List of Hosts separated by space. Ex: ./<script> -c Host1 Host2 ")

args = parser.parse_args()
## Variables
username = args.user
hosts = args.client
password = args.password
keyfile = os.path.join(os.environ['HOME'], '.ssh', 'id_rsa')

print "User:", username
print "Hostnames:", hosts
print "Password:", password

os.system('mkdir -p ~/.ssh; chmod 700 ~/.ssh')   # make sure ~/.ssh exists for keygen/known_hosts
if not os.path.isfile(keyfile):
    print "RSA key is missing, generating keypair:", keyfile
    os.system('ssh-keygen -t rsa -f %s -q -N ""' % (keyfile))

os.system('touch ~/.ssh/known_hosts')
key = open(os.path.expanduser('~/.ssh/id_rsa.pub')).read()

for host in hosts:
    deploy_key(key, host, username, password)

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
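
A sample run might look like this (host names are illustrative):

# ./setupPasswordlessSSH.py -p welcome1 -c node1 node2
User: root
Hostnames: ['node1', 'node2']
Password: welcome1
Password-less SSH has been setup with: node1
Password-less SSH is already enabled for: node2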

Friday, July 15, 2016

DB 12c In-Memory - Find which SQL queries (SQL IDs) are really using In-Memory.

I was recently working with the Database 12c In-Memory feature and wanted to find all the SQL queries (SQL IDs) that actually used In-Memory scans.

I wrote a small PL/SQL script which quickly gives a list of all SQL_IDs and Plan_Hash_Values, along with the object names, that used an In-Memory full scan.

You can try the script below. You just have to enter a begin time and end time (a past window, within the AWR retention period) in 24-hour format (DD/MM/YY HH24:MI), plus the schema name you are interested in.

------------ copy from here ------------

set pages 2000 lines 200
SET FEEDBACK OFF
set serveroutput on format wrapped

BEGIN
  dbms_output.put_line('Input Time Format(24 Hours): DD/MM/YY HH24:MI');
 END;
/

DECLARE
output number;

BEGIN
   FOR v IN (SELECT /*+ parallel(d,4) */ DISTINCT
                    sql_id, plan_hash_value AS plan_value
               FROM dba_hist_sql_plan d
              WHERE object_owner = '&SchemaName'
                AND timestamp BETWEEN to_date('&Begin_Time','DD/MM/YY HH24:MI')
                                  AND to_date('&End_Time','DD/MM/YY HH24:MI')
   )
   LOOP
  output := 0;
  FOR c_val IN (
    WITH plan_values AS (
      select P.plan_table_output as plan_tab
        from table(dbms_xplan.display_awr(v.sql_id, v.plan_value)) P
       where P.plan_table_output like '%TABLE ACCESS INMEMORY FULL%'
    )
    SELECT S.* FROM plan_values S
  )
  LOOP
  DBMS_OUTPUT.PUT_LINE(c_val.plan_tab);
  IF c_val.plan_tab like '%INMEMORY%'
  then
  output := 1;
  END IF;

  END LOOP;
  IF output = 1
  THEN
        dbms_output.new_line;
        DBMS_OUTPUT.PUT_LINE('SQLID : '||v.sql_id||'    Plan_Hash_Value : '||v.plan_value);
        dbms_output.new_line;
  END IF;
   END LOOP;
END;
/


---------------------------------------------------------------------------------------------------------------------
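
For reference, a run might look like this in SQL*Plus (the script name inmemory_scans.sql and the inputs are illustrative; pick any window inside your AWR retention):

SQL> @inmemory_scans.sql
Input Time Format(24 Hours): DD/MM/YY HH24:MI
Enter value for schemaname: SH
Enter value for begin_time: 01/07/16 09:00
Enter value for end_time: 15/07/16 18:00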

Tags : 
db 12c in memory find sql ids
db 12c in-memory find sql queries
db 12c in-memory list all sql
oracle database 12c monitoring

Monitor and Analyze your Oracle Database (11g, 12c)

These are very useful scripts that I use often; they can be very handy.
[For RAC, you may have to tweak them a bit, using gv$ views for some scripts.]

1. Monitor the database: which top SQLs are currently running in your DB instance.

(Get SQL ID, SQL text, serial#, elapsed time, plan_hash_value, etc.) This can be a good start :)

set lines 200 pages 200
col USERNAME format a15
col SQL_TEXT format a30
select * from (
select sesion.sid,
       sesion.serial#,
       sesion.username,
       optimizer_mode,
       plan_hash_value,
       address,
       cpu_time,
       elapsed_time/1000/1000 elapsed_second,
       sqlarea.sql_id,
       substr(sql_text,1,30) sql_text
  from v$sqlarea sqlarea, v$session sesion
 where sesion.sql_hash_value = sqlarea.hash_value
   and sesion.sql_address    = sqlarea.address
   and sesion.username is not null
 order by elapsed_time desc)
 where rownum <= 20;

2.  Show SQL Execution Plan for any SQL  ID


OK, now I know which SQLs are currently running, and I am interested in checking their execution plans. You can get the execution plan from the cursor cache; if it's not available there, you can get it from AWR.

Cursor :
select * from table(dbms_xplan.display_cursor('&sql_id',0));

AWR:
select plan_table_output from table (dbms_xplan.display_awr('&sqlid'));

From AWR you get all previous execution plans that were used for that SQL ID.
Suppose I need the execution plan for a SQL ID with a specific plan_hash_value only; then I would do:

SQL_ID & Plan_Hash_Value:
select plan_table_output from table (dbms_xplan.display_awr('&sql_id','&PlanHash')); 

Get bind values for SQL query along with execution plan

select * from table(dbms_xplan.display_cursor('&sqlid',NULL,'peeked_binds'));


3.  SQL ID execution history: previous executions, elapsed times, and which execution plan was used.


Now I am also interested to see the full history of this SQL: when it was executed, how many times, and which execution plan it used each time. This is very interesting and very useful.

set lines 200 pages 200
col execs for 999,999,999
col avg_etime for 999,999.999
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
col sql_profile format a30
break on plan_hash_value on startup_time skip 1
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode (nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode (nvl(executions_delta,0),0,1,executions_delta)) avg_lio,sql_profile
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id = nvl('&sql_id','XXXXXXXXX')
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1, 2, 3
/


The output gives you the Snapshot ID, node, execution time, plan details (including whether it used a different execution plan at different times), number of executions, average execution time, AVG_LIO (logical IO, or buffer gets), and whether it is using any SQL profile that you have created. Pretty cool script!!

4. Find Top wait events in last 5 minutes.


set lines 200 pages 200
col WAIT_CLASS format a30
select * from (Select distinct a.event, a.total_waits, a.time_waited, a.average_wait, c.wait_class
  From v$system_event a, v$event_name b, v$system_wait_class c, v$active_session_history d
  Where a.event_id=b.event_id
  And a.event_id=d.event_id
  And b.wait_class#=c.wait_class#
  And c.wait_class in ('CPU','CPU Wait','Application','Concurrency','Cluster','Other','Network','Configuration','Commit','User I/O','System I/O')
  And d.sample_time > sysdate - ( 5 / ( 24 * 60 ) )
  order by average_wait desc)
  where rownum <= 15;
-- Waits during last 5 Minutes
SELECT event Event_LAST_5Min,
       COUNT(*)
FROM   v$active_session_history
WHERE  session_state = 'WAITING'
       AND sample_time > sysdate - ( 5 / ( 24 * 60 ) )
GROUP  BY event
ORDER  BY COUNT(*) DESC;


5.  Find SQL ID or SQL Query waiting on wait event


select event, sql_id, count(*),
avg(time_waited) avg_time_waited
from v$active_session_history
where event like nvl('&event','%more data from%')
group by event, sql_id
order by event, 3
/
 

6.  Find database size 


col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/
 

7. Find all Tablespace Size and usage 


SELECT /* + RULE */  df.tablespace_name "Tablespace",
       df.bytes / (1024 * 1024) "Size (MB)",
       SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
       Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
       Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
  FROM dba_free_space fs,
       (SELECT tablespace_name,SUM(bytes) bytes
          FROM dba_data_files
         GROUP BY tablespace_name) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /* + RULE */ df.tablespace_name tspace,
       fs.bytes / (1024 * 1024),
       SUM(df.bytes_free) / (1024 * 1024),
       Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
       Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
  FROM dba_temp_files fs,
       (SELECT tablespace_name,bytes_free,bytes_used
          FROM v$temp_space_header
         GROUP BY tablespace_name,bytes_free,bytes_used) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
 ORDER BY 4 DESC;

 

8.  Find all indexes for a table, with details


break on index_name on uniqueness skip 1
column column_expression format a20
col COLUMN_NAME format a40
col INDEX_NAME format a30
select i.index_name, i.uniqueness, c.column_name, f.column_expression,i.status,i.visibility,i.last_analyzed
from all_ind_columns c, all_indexes i, all_IND_EXPRESSIONS f
where i.table_owner = '&OWNER'
and i.table_name = '&TABLE_NAME'
and i.index_name = c.index_name
and i.owner = c.index_owner
and c.index_owner = f.index_owner(+)
and c.index_name = f.index_name(+)
and c.table_owner = f.table_owner(+)
and c.table_name = f.table_name(+)
and c.column_position = f.column_position(+)
order by i.index_name, c.column_position
/


9.  Fetch Table definition


select DBMS_METADATA.GET_DDL('TABLE','&TABLE_name') from DUAL;

10.  Fetch Tablespace definition


select
     dbms_metadata.get_ddl('TABLESPACE','&tablespace_name')
from
     dual
;




I'll end here with these 10 most useful scripts; I will share some more scripts in my next posts.

 


Thursday, May 22, 2014

Observed limited application scalability with a multi-process COBOL application on ZFS.

[ Keyword : ZFS, dmu_zfetch_find, dmu_zfetch, dmu zfetch, Solaris 11, Cobol, high SYS]


Recently we saw an issue where a COBOL application was not scaling on ZFS, due to the default file-level prefetch mechanism in ZFS. Significant system time was spent in file system locking, and "prstat -mL" showed application threads spending 100% of their CPU time in SYS.
$ prstat -mL 5

 PID USERNAME  USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
 10523 cobol   0.3 100 0.0 0.0 0.0 0.0 0.0 0.0   0  22  7K   0 runcobol/1
 13050 cobol   0.3 100 0.0 0.0 0.0 0.0 0.0 0.0   2  29  8K   0 runcobol/1
 13387 cobol   0.7  99 0.0 0.0 0.0 0.0 0.0 0.0   5  26 18K   0 runcobol/1
 10654 cobol   0.6  99 0.0 0.0 0.0 0.0 0.0 0.0   3  26 18K   0 runcobol/1
 11444 cobol   1.0  99 0.0 0.0 0.0 0.0 0.0 0.0   6  27 26K   0 runcobol/1
 13504 cobol   1.2  99 0.0 0.0 0.0 0.0 0.0 0.0   7  33 31K   0 runcobol/1
 10155 cobol   1.1  99 0.0 0.0 0.0 0.0 0.0 0.0   9  29 29K   0 runcobol/1
 12271 cobol   0.5  99 0.0 0.0 0.0 0.0 0.0 0.0   9  26 16K   0 runcobol/1
  9907 cobol   1.0  99 0.0 0.0 0.0 0.0 0.0 0.0  15  26 26K   0 runcobol/1
  9684 cobol   0.7  99 0.0 0.0 0.0 0.0 0.0 0.0   8  24 21K   0 runcobol/1

and lockstat showed the "dmu_zfetch_find" caller as the main consumer.
# lockstat -C sleep 10
Adaptive mutex spin: 144772 events in 10.051 seconds (14404 events/sec)

Count indv cuml rcnt     nsec Lock                   Caller                 
-------------------------------------------------------------------------------
54159  37%  37% 0.00  2000931 0xc4064c958f98         dmu_zfetch_find+0x2fc  
53930  37%  75% 0.00  1307159 0xc4064c958f98         dmu_zfetch_find+0x3d8  
 5824   4%  79% 0.00     4712 cpu_info_template_lock read_kstat_data+0x174  
 3792   3%  81% 0.00     2498 kstat_chain_lock       kstat_rele+0xc         
 1822   1%  83% 0.00    72808 0xc4046a36ee58         taskq_thread_wait+0x40 
 1657   1%  84% 0.00   124276 0xc4046a36ee58         taskq_thread+0x408     
 1632   1%  85% 0.00     1485 0xc404656960d8         rrw_enter_read+0x4     
 1515   1%  86% 0.00     5833 kstat_chain_lock       kstat_hold+0xc         
Looking around a bit, the Solaris documentation describes this drawback of the zfs_prefetch option: http://docs.oracle.com/cd/E26502_01/html/E29022/chapterzfs-4.html

By default prefetch is enabled (zfs_prefetch_disable is 0):
# echo zfs_prefetch_disable/D |mdb -k
zfs_prefetch_disable:
zfs_prefetch_disable:           0

Then we tried disabling it. Immediately after disabling prefetch, the application started scaling normally.
You can set this value dynamically:
# echo zfs_prefetch_disable/W0t1 | mdb -kw
Or persistently, by adding the following line to /etc/system and rebooting the server:
set zfs:zfs_prefetch_disable=1

Tuesday, April 9, 2013

Add Timestamp to Solaris vmstat and iostat output

 

One-liners to add a timestamp to Solaris vmstat and iostat output:


$ vmstat 2 |while read line; do echo -e "$line" '\t' "`date +%T`" ;done

$ iostat -xncz 2 | while read line; do if [[ $line == *extended* ]]; then echo -e "$line" '\t' "`date +%T`"; else echo -e "$line"; fi done

Or you can wrap them in scripts: put the scripts below in /usr/bin, then just run "vmstat_withtime <interval>".
$ vi vmstat_withtime
#!/bin/bash
INTR=$1
if [ $# -eq 0 ]
then
INTR=10
fi
vmstat $INTR |while read line; do echo -e "$line" '\t' "`date +%T`" ;done  


$ vi iostat_withtime
#!/bin/bash
INTR=$1
if [ $# -eq 0 ]
then
INTR=10
fi
iostat -xncz $INTR |while read line; do if [[ $line == *extended* ]]; then echo -e "$line" '\t' "`date +%T`"; else echo -e "$line"; fi done
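
To use them, make both scripts executable and run them with an interval in seconds (paths assume you placed them in /usr/bin as mentioned above):

$ chmod +x /usr/bin/vmstat_withtime /usr/bin/iostat_withtime
$ vmstat_withtime 5
$ iostat_withtime 5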



Friday, January 11, 2013

RMAN Cold BACKUP & RMAN RESTORE Steps

RMAN Cold Backup (compressed backup) script

Below are simple RMAN scripts to back up and restore a database without any hurdles.

In the script below, replace the environment-specific values with your own details; for example, my ORACLE_SID is ORCL1, so replace it with your DB details.
Note: Here my database is on the ASM diskgroup +DATADG.
I used parallelism 32; you can try less or more depending on your DB size and compute.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$backup.sh 
PATH=$PATH:$HOME/bin
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
export ORACLE_HOME
ORACLE_SID=ORCL1
export ORACLE_SID
ORACLE_UNQNAME=ORCL
export ORACLE_UNQNAME
PATH=$HOME/bin:$ORACLE_HOME/bin:$PATH
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#mkdir -p /BACKUP/RMAN/log                             #Creating backup directories to place the backup
#mkdir -p /BACKUP/RMAN/ORCL/controlfile       #Uncomment these lines if the directories don't exist.
#mkdir -p /BACKUP/RMAN/ORCL/pfile
#mkdir -p /BACKUP/RMAN/ORCL/RMANFullBkp

rman target / log=/BACKUP/RMAN/log/ORCL`date +%d%m%Y`.log <<EOF

configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '/BACKUP/RMAN/ORCL/controlfile/%F.ctl';
CONFIGURE DEVICE TYPE DISK PARALLELISM 32 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/BACKUP/RMAN/ORCL/RMANFullBkp/cold_bkp_%U';

sql "create pfile=''/BACKUP/RMAN/ORCL/pfile/pfileORCL`date +%d%m%Y`.ora'' from spfile";
sql 'alter system checkpoint';
sql 'alter system checkpoint';
shutdown immediate;
startup;
sql 'alter system checkpoint';
shutdown immediate;
startup mount;
backup as compressed backupset database;
alter database open;
report schema;
exit;
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Just run the above script (./backup.sh); it will shut down the database, bring it to mount state, and after the backup completes it brings the DB back to the open state. You can watch the log from another terminal in the /BACKUP/RMAN/log/ directory.
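
For example, to follow today's log (file name format as used in the script):

$ tail -f /BACKUP/RMAN/log/ORCL`date +%d%m%Y`.log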

Restore script:

Assumption: The same ASM disk group name, +DATADG (in this case), is already present, and the respective directories have already been created inside +DATADG (in this case +DATADG/ORCL/DATAFILE etc.; if they are not present, you need to create them manually, as sketched below).
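
A quick sketch of creating them with asmcmd (run as the Grid/ASM software owner; the DB name ORCL is from this example):

$ asmcmd
ASMCMD> mkdir +DATADG/ORCL
ASMCMD> mkdir +DATADG/ORCL/DATAFILE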

Again, replace the backup file names with the appropriate values from the backup you took.

My restore.sh script looks like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$restore.sh
run
{
#shutdown immediate;
startup pfile='/BACKUP/RMAN/ORCL/pfile/pfileORCL11122012.ora' nomount;
restore spfile from '/BACKUP/RMAN/ORCL/controlfile/c-838456344-20130104-00.ctl';
shutdown immediate;
startup nomount;
restore controlfile from '/BACKUP/RMAN/ORCL/controlfile/c-838456344-20130104-00.ctl';
alter database mount;
restore database;
alter database open resetlogs;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run the restore script as below, as the oracle user:
$ rman target /
RMAN> @restore.sh


Monday, December 31, 2012

Oracle VM Server for SPARC - Cheat sheet / Quick steps to configure VM (LDom)

~~~~~~~~~~~~~~Installation~~~~~~~~~~~~~~~

Download latest binary from Oracle website & upgrade the system firmware.
http://www.oracle.com/technetwork/server-storage/vm/downloads/index.html 

Note: On Oracle Solaris 10 you need to manually install the LDoms packages; on Solaris 11 they are part of the default installation. LDoms are supported on Oracle's SPARC T-Series servers only.
 
unzip LDom_3.0.zip
./Install/install-ldm

You can choose 'no' to the LDoms configuration assistant if you want to configure manually. Below is just a quick reference of the commands to configure an LDom (it's recommended that you go through the Oracle VM Server for SPARC (LDoms) administration guide for details).

 

~~~~~~~~~~~Control Domain Initial Configuration~~~~~~~~~~

1. After installation, the ldmd service will be enabled by default:
svcs svc:/ldoms/ldmd:default

To enable :  
svcadm enable svc:/ldoms/ldmd:default
 

2. Adding virtual disk service
ldm add-vds primary-vds0 primary

3. Adding console service
ldm add-vcc port-range=5000-5100 primary-vcc0 primary

4. Adding Virtual network switch

ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

5.
ldm list-services primary
 

6. Start a delayed reconfiguration (requires a reboot):
ldm start-reconf primary

7. Set 2 CPU cores & 8GB memory for the control domain:
ldm set-vcpu -c 2 primary
ldm set-memory 8G primary
ldm add-config initial
ldm list-config


8. IO domain configuration.
Recent LDoms releases support allocating PCIe buses / leaf-level devices directly to guest domains (see the Admin Guide for details).

a. List the devices/buses which can be added to guest domains.
ldm list-io
b. Remove the IO device/bus you want to allocate to a guest domain from the primary (add-io gives it back to the primary if needed):
   ldm remove-io <Bus/device> primary
   ldm add-io <Bus/device> primary

Note: If you remove a bus, the internal disks and onboard network ports associated with that PCI controller/bus will be removed from the primary, and once you add the bus to a guest LDom they will appear in the guest LDom. If you are creating an IO domain, there is no need to virtualise the disks and network (steps 2 & 4 above).
   
shutdown -y -g0 -i6
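
For example, with an illustrative bus name pci_0 (take the real name from the ldm list-io output):

# ldm remove-io pci_0 primary
# shutdown -y -g0 -i6
(and later, during guest domain configuration)
# ldm add-io pci_0 ldom1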

~~~~~~~~~~~~~~~~Configuring Guest LDom~~~~~~~~~~~~~~~~~~

1. Create a logical domain.
ldm add-domain ldom1

2. Add CPUs to the guest domain.

ldm set-vcpu -c 4 ldom1

3. Add memory to the guest domain.
 

ldm set-memory 8G ldom1

4. Add IO device/bus directly to guest domain.

You can add the IO bus/device which was removed from the primary in step 8 above, so that the on-board disks & network associated with that bus (or the leaf-level devices) become directly available in the guest LDom. In that case you may not need to virtualise them as in steps 5 & 6 below.

ldm list-io
ldm add-io <bus/device> ldom1     

5. Add a virtual network device to the guest domain.

ldm add-vnet vnet1 primary-vsw0 ldom1

6. Add a virtual disk to the LDom (with any of the three options below).

a. Adding physical disk

ldm add-vdsdev /dev/dsk/c0t0d0s2 vol1@primary-vds0
ldm add-vdisk vdisk0 vol1@primary-vds0 ldom1

b. Create a file with mkfile and add it as another disk (install_disk) on which to install the Solaris OS.
primary# mkfile 30g /LDomFS/Ldisk1 

primary# ldm add-vdsdev /LDomFS/Ldisk1 install_vol@primary-vds0
primary# ldm add-vdisk vdisk_install install_vol@primary-vds0 ldom1 

c. Create a volume with zfs create -V and add it as a disk (install_disk) on which to install the Solaris OS.
(If you are using ZFS you can also create the disk as below and use the ZFS snapshot/clone feature if you want to replicate LDoms later; see the sketch after these commands.)
 primary# zfs create -V 30g rpool/Ldisk1 
 primary# ldm add-vdsdev /dev/zvol/dsk/rpool/Ldisk1 install_vol@primary-vds0
 primary# ldm add-vdisk vdisk_install install_vol@primary-vds0 ldom1
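
A sketch of replicating LDom disks with snapshot/clone (the names Ldisk2, vol2, and ldom2 are illustrative):

 primary# zfs snapshot rpool/Ldisk1@golden
 primary# zfs clone rpool/Ldisk1@golden rpool/Ldisk2
 primary# ldm add-vdsdev /dev/zvol/dsk/rpool/Ldisk2 vol2@primary-vds0
 primary# ldm add-vdisk vdisk0 vol2@primary-vds0 ldom2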
 

7.
a. Add the OS ISO image and install from it:

ldm add-vdsdev /solaris/sol-10-u10-ga2-sparc.iso iso_vol@primary-vds0
ldm add-vdisk vdisk_iso iso_vol@primary-vds0 ldom1

b. Installing from Physical DVD drive.

svcadm disable volfs
ldm add-vdsdev /dev/dsk/c0t0d0s2 dvd_vol@primary-vds0
ldm add-vdisk vdisk_cd_media dvd_vol@primary-vds0 ldom1


8. Set the auto-boot variable to false to stop at the 'ok' prompt (or true to boot automatically), and set the boot device:

ldm set-variable auto-boot\?=false ldom1
or
ldm set-variable auto-boot\?=true ldom1

ldm set-var boot-device=vdisk ldom1 


9. Enable console service
svcadm enable vntsd
svcs -l vntsd

 
Finally:
ldm bind ldom1


ldm list-domain

ldm start ldom1 or ldm stop ldom1


10. Save the config
ldm add-config complete  


~~~~~~Installing OS~~~~~

ldm list

telnet localhost 5000
{0} ok show-disks
{0} ok devalias
{0} ok boot vdisk_iso


Proceed with regular installation. 

Ctrl+]
telnet> q   (to come out of the telnet session)