Friday, 16 February 2018

ORA-65096: invalid common user or role name and ORA-65049: creation of local user or role is not allowed in CDB$ROOT



This error usually occurs when we try to create a user (a common user) under the root container. In Oracle 12c there are two types of users: common users and local users.
Common users belong to the CDB as well as to all current and future PDBs. A common user can perform operations in the container or in a pluggable database according to the privileges assigned to it.
Local users belong to a single PDB only. A local user may have administrative privileges, but those privileges apply only within that PDB.
// Consider the following example, in which I try to create a common user in the root container.
SQL> show con_name
CON_NAME
——————————
CDB$ROOT
SQL> create user scott identified by scott;
create user scott identified by scott
*
ERROR at line 1:
ORA-65096: invalid common user or role name
SQL> create user scott identified by scott container=current;
create user scott identified by scott container=current
*
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT
SQL> create user scott identified by scott container=all;
create user scott identified by scott container=all
*
ERROR at line 1:
ORA-65096: invalid common user or role name


FYI: the error occurs because of the naming rule for common users: in a multitenant container database, a common user name must begin with C## (or c##).
FIX:
// If you wish to create a common user under CDB$ROOT, then create the user with a name starting with C## or c##, as follows:
Note:
  • Common user will be created under root container only.
  • Current container must be set to CDB$ROOT.
SQL> create user C##scott identified by scott;
User created.
SQL> create user c##scott identified by scott container=all;
User created.
OR
// Creating local user in PDB:
SQL> alter session set container=sales;
Session altered.
SQL> sho con_name
CON_NAME
——————————
SALES
SQL> create user test identified by test;
User created.
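A newly created user, common or local, has no privileges until some are granted. A minimal sketch of granting to the common user created above (CONTAINER=ALL makes the grant apply in every container; omitting it limits the grant to the current container):

```sql
-- Let c##scott connect in all current and future containers:
SQL> grant create session to c##scott container=all;

-- Without the CONTAINER clause, the grant applies to the current container only:
SQL> grant create table to c##scott;
```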



Thursday, 26 October 2017

Validating OBIEE 11G Catalog


Validating the Catalog
Over time, inconsistencies can develop in the catalog as links are broken, 
users are deleted, or NFS file system issues are encountered. 
These inconsistencies can eventually lead to incorrect behavior, 
such as the inability to edit an agent's recipient list. 
You can periodically take the production system offline and validate the catalog, 
to be informed of and to take corrective action on inconsistencies.
Performing a Basic Validation of the Catalog
Use the following procedure to perform a basic validation of the catalog.
To validate the catalog:
1.     Stop Presentation Services.
2.     In 11g only, regenerate the user GUIDs for the catalog.
Ensure that you regenerate the user GUIDs before validating the catalog. Failure to do so can result in the removal of all accounts, permissions, and privileges from the catalog.
1.     Update the FMW_UPDATE_ROLE_AND_USER_REF_GUIDS parameter in NQSConfig.INI:
a.     Open NQSConfig.INI for editing at:
       ORACLE_INSTANCE/config/OracleBIServerComponent/coreapplication_obisn
b.     Locate the FMW_UPDATE_ROLE_AND_USER_REF_GUIDS parameter and set it to YES, as follows:
       FMW_UPDATE_ROLE_AND_USER_REF_GUIDS = YES;
c.     Save and close the file.
2.     Update the Catalog element in instanceconfig.xml:
a.     Open instanceconfig.xml for editing at:
       ORACLE_INSTANCE/config/OracleBIPresentationServicesComponent/coreapplication_obipsn
b.     Locate the Catalog element and update it as follows:
       <Catalog>
          <UpgradeAndExit>false</UpgradeAndExit>
          <UpdateAccountGUIDs>UpdateAndExit</UpdateAccountGUIDs>
       </Catalog>
c.     Save and close the file.
3.     Restart the Oracle Business Intelligence system components using opmnctl:
       cd ORACLE_HOME/admin/instancen/bin
       ./opmnctl stopall
       ./opmnctl startall
4.     Set the FMW_UPDATE_ROLE_AND_USER_REF_GUIDS parameter in NQSConfig.INI back to NO.
Important: You must perform this step to ensure that your system is secure.
5.     Update the Catalog element in instanceconfig.xml to remove the UpdateAccountGUIDs entry.
6.     Restart the Oracle Business Intelligence system components again using opmnctl:
       \Oracle_BI1\opmn\bin> opmnctl stopall
       \Oracle_BI1\opmn\bin> opmnctl startall

3.     Back up the catalog by using the 7-Zip utility to create a compressed file for it.
4.     Create a backup copy of the instanceconfig.xml file.
5.     Edit the instanceconfig.xml file so that it contains the appropriate elements for 
performing the validation.
6.     Start Presentation Services to run the validation according to the values that you 
specified in the instanceconfig.xml file.
7.     Stop Presentation Services.
8.     Create a backup copy of the instanceconfig.xml file in which you added the validation 
elements, renaming the file similar to instanceconfig_validate.xml. In this way, you have 
a version of the file to use as a starting point for subsequent validations.
9.     Restore the backup version of the instanceconfig.xml that you created earlier to use
 as the current version.
10.  Start Presentation Services.
Specifying the Elements for Validating the Catalog
As part of the process of validating the catalog, you include elements in the instanceconfig.xml file
 that run the validation when you restart Presentation Services. The following procedure describes
 how to edit the instanceconfig.xml file to include these elements.
To specify the elements for validating the catalog:
1.    Open the instanceconfig.xml file for editing.
2.     The file is located in \instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1.
3.    Locate the Catalog section in which you must add the elements that are described in Table 17-1.
4.  Include the elements and their ancestor element as appropriate, as shown in the following example. In this example, the validation runs when Presentation Services starts. Inconsistent accounts (such as those for deleted users), links, and objects are removed. Inconsistent users' home directory names are logged but directories are not removed.
<ps:Catalog xmlns:ps="oracle.bi.presentation.services/config/v1.1">
      <UpgradeAndExit>false</UpgradeAndExit>
         
  <ps:Validate>OnStartupAndExit</ps:Validate>
  <ps:ValidateAccounts>Clean</ps:ValidateAccounts>
  <ps:ValidateHomes>Report</ps:ValidateHomes>
  <ps:ValidateItems>Clean</ps:ValidateItems>
  <ps:ValidateLinks>Clean</ps:ValidateLinks>
  </ps:Catalog>

Caution:
Include only one Catalog element in the instanceconfig.xml file or unexpected results 
might occur. Unless expressly noted, include most nodes in an XML document only once.
5.    Save your changes and close the file.

In OBIEE 11g it is important that we validate the catalog after upgrading the
 catalog from 10g so that the catalog is consistent and ready for migration to
 production environments.

As part of the process of validating the catalog, you include elements in the
 instanceconfig.xml file that run the validation when you restart Presentation Services.


The validation runs when Presentation Services starts, and Presentation
Services is stopped when the validation is complete. Inconsistent accounts
(such as those for deleted users), links, and objects are removed.
 Inconsistent users' home directory names are logged but directories are not removed.

<ServerInstance>
<Catalog>
    <Validate>OnStartupAndExit</Validate>
    <ValidateAccounts>Clean</ValidateAccounts>
    <ValidateHomes>Report</ValidateHomes>
    <ValidateItems>Clean</ValidateItems>
    <ValidateLinks>Clean</ValidateLinks>
</Catalog>
</ServerInstance>



Table 17-1 Elements for Validating the Catalog

Validate (default: None)
Performs the validation of the catalog according to the values of the other Validate-related elements in this section. Values are described in the following list:
  • None — Performs no validation.
  • OnStartupAndExit — When Presentation Services starts, performs the validation, performs the Report or Clean operation, then stops Presentation Services. You run through multiple cycles of Report, Clean, Report, and so on for each element (such as ValidateAccounts, ValidateHomes, ValidateItems, and ValidateLinks) until the catalog is validated.
If this value is not None, then all privileges and each object's ACLs in the entire catalog are cleaned of terminated accounts, regardless of the settings of the other Validate-related elements.

ValidateAccounts (default: None)
Verifies that all information about users, roles, and groups in the catalog is consistent. Values are described in the list after this table.

ValidateHomes (default: None)
Verifies that all information about home directories in the catalog is consistent. ValidateHomes is executed only if ValidateAccounts is set to either Report or Clean. Values are described in the list after this table.

ValidateItems (default: None)
Verifies that all information about objects in the catalog is consistent. Values are described in the list after this table.

ValidateLinks (default: None)
Cleans shortcuts in the catalog, but does not reconcile internal references to objects. For example, suppose that a dashboard page includes the text: "display the results here after running /shared/sales/myfavreport". If a user subsequently deletes the myfavreport object, then no fix or message is indicated during validation. Values are described in the list after this table.

The ValidateAccounts, ValidateHomes, ValidateItems, and ValidateLinks elements take the values described in the following list:
  • None — Specifies that no validation is performed.
  • Report — Specifies that inconsistent objects are reported in the log file but not removed.
  • Clean — Specifies that inconsistent objects are reported in the log file and removed from the catalog.

Monday, 23 October 2017

QUICK POST: HYPERION FINANCIAL REPORTING STUDIO “RUN-TIME ERROR 429”



After installing Hyperion Financial Reporting Studio on my client’s laptop, I tested out my install connection. When I clicked on the program from my Start menu, I got an error that said, “Run-time error 429”. What?
The solution ended up being an easy one…in your C:\Oracle\Middleware\EPMSystem11R1\products\FinancialReportingStudio\products\financialreporting\install\bin folder is a Windows command script called “HRRunAnt”. Double-click that baby, let it run and, voila, HFR will work!
So what does this command script do? Well, it sets the Ant home, the Java home and adds to the PATH environment variable for HFR, Java and Ant (bin directories, at least). Essentially, telling ET where to phone home.
…Now you curious ones may be wondering what Ant is… It is a Java-based tool used for building Java products. No need to go in deeper than that!

Saturday, 26 August 2017

EBS 12.2.5 Install Error : Fatal Error: TXK Install Services

  Symptoms

     EBS 12.2 install Error : Fatal Error: TXK Install Service

Cause


This is related to an issue with the buildStage script on RHEL7 platform. Our developers are working to deliver a fix for this soon.

RHEL 7 with Start CD 50
Error:
oracle.apps.fnd.txk.config.ProcessStateException: OUI process failed : Exit=1 See log for details. CMD= /b01/1225/TechInstallMedia/ohs11117/Disk1/runInstaller -waitForCompletion -ignoreSysPrereqs -force -silent -responseFile /u01/oracle/VIS/fs2/inst/apps/VIS_af82/temp/cfgHome/response/APPS_OHS_HOME/txkOHS_11117.rsp

at oracle.apps.fnd.txk.config.OUIPatchActionNode.processState(OUIPatchActionNode.java:159)

at oracle.apps.fnd.txk.config.PatchActionNode.processState(PatchActionNode.java:187)
at oracle.apps.fnd.txk.config.PatchNode.processState(PatchNode.java:338)
at oracle.apps.fnd.txk.config.PatchesNode.processState(PatchesNode.java:79)
at oracle.apps.fnd.txk.config.InstallNode.processState(InstallNode.java:68)
at oracle.apps.fnd.txk.config.TXKTopology.traverse(TXKTopology.java:594)
at oracle.apps.fnd.txk.config.InstallService.doInvoke(InstallService.java:224)
at oracle.apps.fnd.txk.config.InstallService.invoke(InstallService.java:237)
at oracle.apps.fnd.txk.config.InstallService.main(InstallService.java:291)
Cannot install Web Tier Utilities

RW-50010: Error: – script has returned an error: 1

RW-50004: Error code received when running external process. Check log file for details.
Running APPL_TOP Install Driver for VIS instance

Solution


1. Abandon the installation work.
2. Delete the directories/files you have already installed with Rapid Install.
3. Manually copy filegroup3.jar.modified from /startCD/Disk1/rapidwiz/etc to /TechInstallMedia/ohs11117/Disk3/stage/Components/oracle.calypso/11.1.1.7.0/1/DataFiles as filegroup3.jar, e.g.:
cp /startCD/Disk1/rapidwiz/etc/filegroup3.jar.modified /TechInstallMedia/ohs11117/Disk3/stage/Components/oracle.calypso/11.1.1.7.0/1/DataFiles/filegroup3.jar
4. Redo the install work.

Wednesday, 2 August 2017

RMAN-06403: Could not obtain a fully authorized session tips 


RMAN-03002: failure of allocate command at 01/13/2015 04:27:08

RMAN-06403: could not obtain a fully authorized session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory Additional information: 3640 Additional information: 1158587339



SOLUTION:

There are two likely causes. One, you are trying to connect to a database that does not exist. Two, you are pointing at the correct database, but it has not been started yet. The next message, ORA-01034 (ORACLE not available), confirms the second case: the instance is not up.

So check that you are setting the proper ORACLE_SID (it is case-sensitive on Linux, so watch for caps or typos), and start the instance if it is down.
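A quick way to confirm and fix, assuming the instance is called orcl (substitute your own SID):

```sql
-- From the OS shell, verify the SID before starting RMAN or SQL*Plus:
-- $ echo $ORACLE_SID          (case matters on Linux)
-- $ export ORACLE_SID=orcl
-- $ sqlplus / as sysdba
SQL> startup                          -- start the instance if it is down
SQL> select status from v$instance;   -- should report OPEN
```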





Tuesday, 1 August 2017

GSS in Oracle apps

Gather Schema Statistics program generates statistics that quantify the data distribution and storage characteristics of tables, columns, indexes, and partitions.

The cost-based optimizer (CBO) uses these statistics to calculate the selectivity of predicates and to estimate the cost of each execution plan.

As a general rule, run Gather Schema Statistics under the following circumstances:

1.  After there has been a significant change in data in either content or volume.

2.  After importing data.

3.  Any time end-users notice deterioration in performance in routine day-to-day business transactions or when running concurrent programs.

4.  Run on a regular basis (weekly at a minimum) and anytime after application of patch, conversion, etc.


Estimate Percentage / Modification Threshold defines the percentage of change beyond which statistics gathering is initiated for an object. The default is 10% (meaning that statistics are collected for any table whose data has changed by more than 10% via DML; otherwise the table is skipped).
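Under the AUTO options, the candidates can be previewed from the data dictionary. A hedged sketch (the 10% figure mirrors the default threshold; the column names are those of DBA_TAB_MODIFICATIONS):

```sql
-- Tables whose tracked DML changes exceed ~10% of their last-analyzed row count:
SQL> select m.table_owner, m.table_name,
            m.inserts + m.updates + m.deletes as dml_changes,
            t.num_rows
     from   dba_tab_modifications m
     join   dba_tables t
            on t.owner = m.table_owner and t.table_name = m.table_name
     where  t.num_rows > 0
     and    (m.inserts + m.updates + m.deletes) / t.num_rows > 0.10;
```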


How to run Gather Schema Statistics 
concurrent program:

1. Log on to Oracle Applications with
    Responsibility = System Administrator

2. Submit Request Window
    Navigate to: Concurrent > Requests

3. Query for the Gather Schema Statistics

4. Enter the appropriate parameters. This can be run for specific schemas by specifying the schema name or entering  ‘ALL’  to gather statistics for every schema in the database

5. Submit the Gather Schema Statistics program
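The concurrent program ultimately calls the FND_STATS package, so the same gather can also be run from SQL*Plus. A minimal sketch (the schema name and sample percent are examples, and the call assumes the standard FND_STATS.GATHER_SCHEMA_STATS signature):

```sql
-- Gather statistics for one schema at a 10 percent sample:
SQL> exec fnd_stats.gather_schema_stats('AR', 10);

-- 'ALL' gathers statistics for every registered Oracle Applications schema:
SQL> exec fnd_stats.gather_schema_stats('ALL');
```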


Parameters :

------------------
Schema Name:  Schema for which statistics are to be gathered. Specify ALL for all Oracle Applications schemas

Percent:  The sampling percentage. If left blank, the default value of 10 is used. The valid range is from 0 to 100

Degree:  The degree of parallelism to be used for gathering statistics. If a Degree is not provided, it defaults to the minimum of parallel_max_servers and cpu_count.

Backup Flag:  If NOBACKUP is used, the GATHER_SCHEMA_STATS procedure does not back up the current statistics, so it runs faster.

Restart Request ID:  In the case where the Gather Schema Statistics run fails due to whatever reasons, the concurrent request can be re-submitted and it will pick up where the failed run left off, if you provide the concurrent request_id of the failed run.
History Mode:  Last Run – History records for each object are maintained only for the last gather statistics run. Each subsequent run will overwrite the previous history record for the object. This is the default behavior

Gather Options:  GATHER: All tables and indexes of the specified schema are selected for stats gathering. This is the default.

Modifications Threshold:  Applicable only to GATHER AUTO and LIST AUTO Options

Invalidate Dependent Cursors:  This flag indicates whether cursors dependent on the table being analyzed should be invalidated or not. By default, dependent cursors are invalidated.


How to Gather the Statistics of Custom Schema 
when we submit the concurrent request called 
Gather Schema Statistics :


When we submit Gather Schema Stats with Parameter  ALL, concurrent request will complete successfully, and DBAs will not realize that custom schemas are not analyzed.

SQL> select count(table_name) from dba_tables where last_analyzed is not null and owner = '<CUSTOM_SCHEMA_NAME>';

Here you realize none of the tables in custom schema are analyzed.


The Gather Schema Statistics program gathers statistics for all schemas; however, it skips custom schemas, even those registered in Oracle Applications.

Reason:  

Whenever custom schemas are registered in Oracle Applications, entries are made in two tables, FND_ORACLE_USERID and FND_APPLICATIONS_TL. However, when Gather Schema Statistics is submitted, it uses the query below to get the schema information:

SQL> select distinct upper(oracle_username) sname
     from fnd_oracle_userid a,
          fnd_product_installations b
     where a.oracle_id = b.oracle_id
     order by sname;

Note: When custom schemas are created, no entry is made in FND_PRODUCT_INSTALLATIONS, and hence they are not picked up by this query.

Solution :

To have custom schemas picked up by Gather Schema Stats, make an entry in FND_PRODUCT_INSTALLATIONS as follows.
Go to the Alert Manager responsibility and navigate to the Installations form under the Systems menu.
Define the custom application in this form: go to the last record and make an entry for the custom application. Once this is done, a row is inserted into FND_PRODUCT_INSTALLATIONS.
Submit Gather Schema Stats again, then query DBA_TABLES, and you will see that statistics are now gathered for the custom schemas as well.
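After registering the custom application, you can confirm the schema is now visible to the program by re-running the same join it uses internally (XXCUST below is a hypothetical custom schema name):

```sql
-- Should now return a row for the custom schema:
SQL> select distinct upper(a.oracle_username) sname
     from   fnd_oracle_userid a,
            fnd_product_installations b
     where  a.oracle_id = b.oracle_id
     and    upper(a.oracle_username) = 'XXCUST';
```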