Monday 23 August 2021

SQL to keep Concurrent requests on Hold

 

1) Create a backup table of the requests to be held:

create table apps.conc_req_on_hold_DDMMYY as
select fcr.*, fcrv.user_concurrent_program_name
from fnd_concurrent_requests fcr, fnd_concurrent_programs_vl fcrv
where fcr.phase_code = 'P'
and fcr.hold_flag = 'N'
and fcr.requested_start_date > sysdate
and fcr.concurrent_program_id = fcrv.concurrent_program_id;

 

2) Note the row count: select count(*) from apps.conc_req_on_hold_DDMMYY;

 

3) Place the pending requests on hold:

update fnd_concurrent_requests
set hold_flag = 'Y'
where phase_code = 'P'
and hold_flag = 'N'
and requested_start_date > sysdate
and request_id in (select request_id from apps.conc_req_on_hold_DDMMYY);

 

NOTE: Commit only if the count from step 2 matches the number of rows updated in step 3. Otherwise roll back and retry until the numbers match (a sample check is sketched below).
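A minimal sketch of this check, using only the objects above; both counts should match before committing:

select (select count(*) from apps.conc_req_on_hold_DDMMYY) backup_count,
       (select count(*) from fnd_concurrent_requests
         where hold_flag = 'Y'
           and request_id in (select request_id from apps.conc_req_on_hold_DDMMYY)) held_count
from dual;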

 

4) Commit;

 

To release the hold on the concurrent requests after the maintenance window, run the below SQL:

 

5) update fnd_concurrent_requests set hold_flag='N' where request_id in (select request_id from apps.conc_req_on_hold_DDMMYY);

 

6) Commit the changes:

 

commit;
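To confirm the release (a minimal sketch reusing the backup table), the query below should return 0:

select count(*) from fnd_concurrent_requests
where hold_flag = 'Y'
and request_id in (select request_id from apps.conc_req_on_hold_DDMMYY);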

Wednesday 18 August 2021

DR Maintenance Steps sequence

Pre-steps in all db and apps servers

=======================

Take a backup of the below items:

1. df -hP output of all app and db servers

2. cp -rp /etc/resolv.conf      /etc/resolv.conf_bkp

3. Firewall status output:

service iptables status

service ip6tables status

4. Comment out cron entries if needed

5. date


DR STOP pre-steps:

=================

1. select max( sequence#)  from v$archived_log where applied='YES'; -- From primary

2. select max( sequence#)  from v$archived_log where applied='YES'; -- From DR

Make sure both return the same number (an additional lag check is sketched below).
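Optional lag check from DR (an addition to the original steps; assumes V$DATAGUARD_STATS is populated on the standby):

select name, value from v$dataguard_stats where name in ('transport lag', 'apply lag');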

3. Take output of the below from DR:

ps -ef| grep tns

show parameter db_name, db_unique, local; archive log list

4. Take backup of parameter file

show parameter pfile, spfile

create pfile='?/dbs/init_pfile.ora_bkp' from spfile;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER scope=both;   -- from PRIMARY

On Standby:

Verify that no RFS process shows up in the query below:

SELECT PROCESS,  STATUS, SEQUENCE#  FROM V$MANAGED_STANDBY;

5. Stop the listener on the standby server:

ps -ef| grep tns

lsnrctl stop <listener_name>

6. Cancel managed recovery from the standby:

ps -ef| grep mrp

SQL> alter database recover managed standby database cancel;

Verify it is not running:  ps -ef | grep mrp

7. Comment out cron on the DB server (take a backup of the crontab first) -- standby only

8. Shut down the standby database

Connect to the primary and check its status.

STANDBY -   SQL> shutdown immediate

ps -ef| grep mrp

ps -ef | grep tns
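Extra check that the standby instance is fully down (an addition to the original steps; pmon is the standard Oracle background process name):

ps -ef | grep pmon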

DR start:

======

On all app servers:

Validate the mount points (compare against the df -hP output saved in the pre-steps).

Restore the /etc/resolv.conf file from resolv.conf_25aug20 backup.

          cp -p /etc/resolv.conf_25aug20 /etc/resolv.conf

Disable the firewall running on all 3 servers using the commands below; a status re-check is sketched after them:

         service iptables save

         service iptables stop

         service ip6tables stop

         chkconfig iptables off
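To confirm the firewall is stopped and will not restart on boot (reusing the status commands from the pre-steps; chkconfig --list is assumed to be available on these OS versions):

         service iptables status

         service ip6tables status

         chkconfig --list iptables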

In DR:

1. Uncomment Cron

2. Start db 

SQL> startup mount;

SQL> alter database recover managed standby database disconnect from session;

3. Start Database Listener

    lsnrctl start <listener_name>   -- STANDBY

4. Set log transport state to ENABLE status:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE scope=both;

5. select open_mode, database_role, name from v$database;


6. SELECT PROCESS,  STATUS, SEQUENCE#  FROM V$MANAGED_STANDBY;

7. @gap
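@gap is a local script; a minimal equivalent sketch, assuming the standard V$ARCHIVE_GAP view on the standby, would be:

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;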

8. select max( sequence#)  from v$archived_log where applied='YES'; -- From primary

 select max( sequence#)  from v$archived_log where applied='YES'; -- From DR

9. archive log list; check db_name, db_unique_name, spfile, and the alert logs

10. dgmgrl /, then show configuration   -- primary side

show database verbose 'db_name_primary', 

show database verbose 'db_name_DR'

Thursday 5 August 2021

 

Deleting an Application Tier Node in a Multi-Tier Environment (EBS 12.2)


There are cases where a node is configured incorrectly and needs to be cleaned up and added back, or where a node has been abandoned and must likewise be deleted and re-added. In these and many similar cases, a simple script is used to delete the node.


If the host is accessible:


$ perl $AD_TOP/patch/115/bin/adProvisionEBS.pl ebs-delete-node -contextfile=$CONTEXT_FILE


If the host is not accessible, run the below from another node:


$ perl $AD_TOP/patch/115/bin/adProvisionEBS.pl ebs-delete-node -contextfile=$CONTEXT_FILE -hostname=<Host2bdeleted>


Things it will take care of:


1. It checks the Admin Server in both the RUN & PATCH file systems.


2. It cleans up the database topology information for the specified host in both the RUN & PATCH file systems.


From AD_NODES_CONFIG_STATUS 1 row deleted successfully.

From fnd_oam_context_files 7 row deleted successfully.

From FND_NODES 1 row deleted successfully.

From ADOP_VALID_NODES 1 row deleted successfully.

Deleting RUN file system configuration information.

Server oacore_server4 deleted successfully.

Server forms_server4 deleted successfully.

Server oafm_server4 deleted successfully.

Server forms-c4ws_server4 deleted successfully.


3. Once all of the above is done, it cleans up OHS:


OHS instance EBS_web_<Instance>_Instance_Num unregistered successfully.
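As a hedged follow-up check (FND_NODES and its NODE_NAME/STATUS columns are standard; adjust for your instance), the deleted host should no longer appear in the node tables mentioned above:

select node_name, status from apps.fnd_nodes order by node_name;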