Feed aggregator
Ansible Event Driven Automation
In my previous blogs about Ansible, I covered a few use cases:
- Simple playbooks
- Use Ansible like a Programming Language
- REST API calls
- Gather logs and runtime information
- Infrastructure as Code with YaK: on-premise, create a component, and two blogs regarding JBoss EAP (here and here).
All these still require a manual trigger to run. With Ansible Event Driven (AED), we can be pro-active by running playbooks automatically based on events.
Installation
To install, it takes just a few commands:
- Install pip3 and a JDK:
sudo apt install python3-pip openjdk-17-jdk maven
- Set variables (or add them in ~/.bash_profile to make them permanent):
export PATH=$PATH:$HOME/.local/bin
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
export PIP_NO_BINARY=jpy
export PATH=/home/vagrant/.local/bin:$PATH
- Install the required Python packages:
pip install wheel ansible-rulebook ansible ansible-runner
- Finally, install the required collections (community.general and ansible.eda):
ansible-galaxy collection install community.general ansible.eda
Now, we are ready to run our first rulebook.
Rulebook
AED introduces a new tool as well as a new type of file: the rulebook, which is used with the ansible-rulebook command. The rulebook is a YAML file, so we are in a known environment.
This file consists of three sections:
- Sources: define the events used as input.
- Rules: conditions on events that determine when to apply actions.
- Actions: playbooks, modules or tasks to run.
Here is a simple example from the official GitHub repository using a webhook:
---
- name: Listen for events on a webhook
  hosts: all
  ## Define our source for events
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  ## Define the conditions we are looking for
  rules:
    - name: Say Hello
      condition: event.payload.message == "Ansible is super cool"
      ## Define the action we should take should the condition be met
      action:
        run_playbook:
          name: say-what.yml
Create a simple inventory (for testing purposes only):
all:
  hosts:
    localhost:
      ansible_connection: local
As well as the playbook we will run, say-what.yml:
- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "Thank you, my friend!"
To run it (with verbosity so we can see something):
ansible-rulebook --rulebook webhook.yml -i inventory.yml --verbose
Output:
2023-02-03 16:15:30,048 - ansible_rulebook.app - INFO - Starting sources
2023-02-03 16:15:30,048 - ansible_rulebook.app - INFO - Starting rules
2023-02-03 16:15:30,048 - ansible_rulebook.engine - INFO - run_ruleset
2023-02-03 16:15:30,791 - ansible_rulebook.engine - INFO - ruleset define: {"name": "Listen for events on a webhook", "hosts": ["all"], "sources": [{"EventSource": {"name": "ansible.eda.webhook", "source_name": "ansible.eda.webhook", "source_args": {"host": "0.0.0.0", "port": 5000}, "source_filters": []}}], "rules": [{"Rule": {"name": "Say Hello", "condition": {"AllCondition": [{"EqualsExpression": {"lhs": {"Event": "payload.message"}, "rhs": {"String": "Ansible is super cool"}}}]}, "action": {"Action": {"action": "run_playbook", "action_args": {"name": "say-what.yml"}}}, "enabled": true}}]}
2023-02-03 16:15:30,793 - ansible_rulebook.engine - INFO - load source
2023-02-03 16:15:31,294 - ansible_rulebook.engine - INFO - load source filters
2023-02-03 16:15:31,296 - ansible_rulebook.engine - INFO - Calling main in ansible.eda.webhook
2023-02-03 16:15:31,298 - ansible_rulebook.engine - INFO - Start a playbook action runner for Listen for events on a webhook
2023-02-03 16:15:31,299 - ansible_rulebook.engine - INFO - Waiting for actions on events from Listen for events on a webhook
2023-02-03 16:15:31,302 - ansible_rulebook.engine - INFO - Waiting for events from Listen for events on a webhook
Then, with a simple curl command, we will trigger the webhook:
curl -H 'Content-Type: application/json' -d '{"message": "Ansible is super cool"}' 127.0.0.1:5000/endpoint
In the output of the running command, we will see some new logs as well as something familiar to any Ansible developer: the playbook running.
2023-02-03 16:24:53,960 - aiohttp.access - INFO - 127.0.0.1 [03/Feb/2023:16:24:53 +0000] "POST /endpoint HTTP/1.1" 200 158 "-" "curl/7.74.0"
2023-02-03 16:24:53 962 [main] INFO org.drools.ansible.rulebook.integration.api.rulesengine.RegisterOnlyAgendaFilter - Activation of effective rule "Say Hello" with facts: [Event DROOLS_PROTOTYPE with values = {meta.headers.Accept=*/*, meta.headers.Host=127.0.0.1:5000, payload={message=Ansible is super cool}, payload.message=Ansible is super cool, meta={headers={Accept=*/*, User-Agent=curl/7.74.0, Host=127.0.0.1:5000, Content-Length=36, Content-Type=application/json}, endpoint=endpoint}, meta.headers.Content-Type=application/json, meta.headers.User-Agent=curl/7.74.0, meta.endpoint=endpoint, meta.headers.Content-Length=36, meta.headers={Accept=*/*, User-Agent=curl/7.74.0, Host=127.0.0.1:5000, Content-Length=36, Content-Type=application/json}}]
2023-02-03 16:24:53,964 - ansible_rulebook.rule_generator - INFO - calling Say Hello
2023-02-03 16:24:53,964 - ansible_rulebook.engine - INFO - call_action run_playbook
2023-02-03 16:24:53,965 - ansible_rulebook.engine - INFO - substitute_variables [{'name': 'say-what.yml'}] [{'event': {'payload': {'message': 'Ansible is super cool'}, 'meta': {'headers': {'Accept': '*/*', 'User-Agent': 'curl/7.74.0', 'Host': '127.0.0.1:5000', 'Content-Length': '36', 'Content-Type': 'application/json'}, 'endpoint': 'endpoint'}}, 'fact': {'payload': {'message': 'Ansible is super cool'}, 'meta': {'headers': {'Accept': '*/*', 'User-Agent': 'curl/7.74.0', 'Host': '127.0.0.1:5000', 'Content-Length': '36', 'Content-Type': 'application/json'}, 'endpoint': 'endpoint'}}}]
2023-02-03 16:24:53,966 - ansible_rulebook.engine - INFO - action args: {'name': 'say-what.yml'}
2023-02-03 16:24:53,967 - ansible_rulebook.engine - INFO - Queue playbook/module/job template {'say-what.yml'} for running later
2023-02-03 16:24:53,967 - ansible_rulebook.builtin - INFO - running Ansible playbook: say-what.yml
2023-02-03 16:24:53,970 - ansible_rulebook.builtin - INFO - ruleset: Listen for events on a webhook, rule: Say Hello
2023-02-03 16:24:53,971 - ansible_rulebook.builtin - INFO - Calling Ansible runner
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "Thank you, my friend!"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
We confirmed that our Ansible Event Driven installation is working. In my next blog, I will integrate it with Apache Kafka, the well-known event store and stream-processing platform.
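As a teaser for that next step, here is what a rulebook based on the Kafka source could look like. This is only a sketch of my own: the ansible.eda.kafka source arguments (host, port, topic) and the event structure (assumed here to be under event.body) should be checked against the collection documentation, and the topic name eda-demo is made up.
---
- name: Listen for events on a Kafka topic
  hosts: all
  sources:
    - ansible.eda.kafka:
        host: localhost
        port: 9092
        topic: eda-demo        # hypothetical topic name
  rules:
    - name: Say Hello from Kafka
      # assumption: the Kafka message payload is exposed as event.body
      condition: event.body.message == "Ansible is super cool"
      action:
        run_playbook:
          name: say-what.yml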
The article Ansible Event Driven Automation appeared first on dbi Blog.
Types of EBS Snapshots in AWS and How to Delete Them
Eventually you will have to clean up and delete your EBS snapshots in AWS. But when you try to do that, things get interesting, as it's not that straightforward. This video explains the differences between the types of AWS snapshots and how to delete them.
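As a quick, hedged illustration (not taken from the video), listing and deleting a single snapshot with the AWS CLI could look like this; the snapshot ID is a placeholder, and a snapshot still referenced by a registered AMI needs the AMI deregistered first:
# list the EBS snapshots owned by your account
aws ec2 describe-snapshots --owner-ids self \
  --query 'Snapshots[].{ID:SnapshotId,Volume:VolumeId,Started:StartTime}' --output table

# delete one snapshot (fails if it is still referenced by a registered AMI)
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0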
OGG-12031 error … What happened?
On one of our projects, we have been using an Oracle GoldenGate (Microservices) hub architecture to run the […]
The post OGG-12031 error … What happened? appeared first on DBASolved.
Best Gitops Tools for Devops Engineers
Let's have a quick overview of what GitOps is. GitOps is the new cool kid in town, and a subset of DevOps. GitOps is a practice that helps automate application deployment and infrastructure provisioning. It typically involves using Git, an open source version control system, as a single source of truth for declarative infrastructure and applications. GitOps is mostly used for deploying containerized applications on a Kubernetes cluster along with Kubernetes resources. I have another video going into more detail around this, and the link is in the description.
Migration with Multitenant: how easy it is!
On Oracle Database, the Multitenant feature will be turning 9 in July, but it's not yet widely deployed, as most databases are still running on the non-CDB architecture. With 19c, Multitenant is now mature and free if you stick to a maximum of 3 PDBs per CDB. If the benefits of Multitenant are not obvious at first glance, it's a game changer when it comes to future migrations.
Migration without Multitenant
For decades, migrations have mostly been done by choosing between these two methods:
- in-place upgrade of the database files by connecting them to a new DB home and running the catalog upgrade
- export data from source DB to a dumpfile and import this dumpfile into a newly created database already running the target version
The first option is fast, as long as you stay on the same server, but after years of using this method you still keep the old database structure with its legacy configuration, which is not ideal. I remember, several years ago, working on an Exadata and not being able to use modern features because the database files came from an old 8i or 9i.
The second migration option is the most popular and pretty clean, but rather slow because it's based on DDL and DML statements. And you first need to identify the schemas and dependencies you'd like to migrate. It's easier to do a full export/import operation, but it generates a lot of errors you must analyze and resolve. That's because dictionary metadata and object metadata are mixed, and importing dictionary metadata into a new database is not possible as the dictionary already exists. But DBAs are used to this.
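For illustration only, a minimal Data Pump sketch of that second option; the schema name APP, the directory object and the connect strings are placeholders, not taken from the article:
# export the APP schema from the source database
expdp system@SRC schemas=APP directory=DATA_PUMP_DIR dumpfile=app.dmp logfile=app_exp.log

# import it into a freshly created database already running the target version
impdp system@TGT schemas=APP directory=DATA_PUMP_DIR dumpfile=app.dmp logfile=app_imp.log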
A third option is a combination of both: exporting object metadata only and copying selected datafiles to the new database. It's called Transportable Tablespaces, but it also needs schema and dependency analysis, and it only works if every segment resides in its expected tablespace.
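And a hedged sketch of the Transportable Tablespaces flow, assuming a single tablespace called USERS and made-up file paths:
# on the source: set the tablespace read only, then export its metadata only
sqlplus / as sysdba <<< "alter tablespace USERS read only;"
expdp system@SRC directory=DATA_PUMP_DIR transport_tablespaces=USERS dumpfile=tts_users.dmp logfile=tts_exp.log

# copy the datafiles to the target server, then plug the tablespace into the new database
impdp system@TGT directory=DATA_PUMP_DIR dumpfile=tts_users.dmp \
  transport_datafiles='/u02/oradata/TGT/users01.dbf'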
Purpose of a container in Multitenant
A container database (CDB) is not as important as a non-container database (non-CDB). It's important as long as pluggable databases (PDBs) are running inside. But as your PDBs are easily movable, at some point you will move them, and the old container becomes useless. One of the reasons for moving these PDBs is migrating them to a newer version.
This is why your container shouldn’t have a name related to the applications it runs. The PDB has its own name, and the associated service will follow the PDB when you move it to another container.
Imagine you are preparing the 23c migration of the 3 PDBs in your 19c container C1917TST. You will create a new C2301TST container, with the same settings as C1917TST, move the PDBs to this new container, and your migration is done. You can then delete the old container. Let's test this.
Testing a migration from 19c to 21c
23c is the next long-term release, but it's not yet available, so the tests will be done with the current innovation release: 21c.
Migrating a PDB to a newer version could be limited to unplugging the database and plugging it into a new container. In this example, source CDB is 19c, and target will be 21c.
I’m pretty sure it will work fine, but as I’m using multitenant, I can duplicate my PDB and I will safely migrate the copy instead. If something goes wrong, I still have my original PDB with its original name in the source CDB.
. oraenv <<< CDB19C1
sqlplus / as sysdba
create pluggable database JRQDUP from JRQO4D;
alter pluggable database JRQDUP open;
...
alter pluggable database JRQDUP save state;
alter pluggable database JRQDUP close immediate;
Now let’s unplug this PDB and remove it from the source CDB:
alter pluggable database JRQDUP unplug into '/home/oracle/jrqdup.xml';
drop pluggable database JRQDUP keep datafiles;
exit
Let’s plug it into the new container. The MOVE option is used to move the datafiles to the correct subfolder:
. oraenv <<< CDB21C4
sqlplus / as sysdba
create pluggable database JRQDUP using '/home/oracle/jrqdup.xml' MOVE;
alter pluggable database JRQDUP open;
...
Opening this PDB is not immediate; let's have a look at what's happening from another sqlplus session:
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 JRQDUP MIGRATE YES
The PDB is first opened in restricted (MIGRATE) mode to upgrade the metadata.
Once finished, my PDB is automatically opened in normal mode:
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 JRQDUP READ WRITE NO
And that’s it. Upgrade of the PDB is automatic when opening the database.
The next step would be stopping and dropping the source PDB and renaming the new one if you want to keep the original name.
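That last step could look like the following sketch (the PDB names match the example above; note that renaming a PDB requires it to be open in restricted mode):
-- in the 19c container: drop the original PDB
alter pluggable database JRQO4D close immediate;
drop pluggable database JRQO4D including datafiles;

-- in the 21c container: rename the migrated copy back to the original name
alter pluggable database JRQDUP close immediate;
alter pluggable database JRQDUP open restricted;
alter session set container = JRQDUP;
alter pluggable database rename global_name to JRQO4D;
alter pluggable database close immediate;
alter pluggable database open;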
Conclusion
With Multitenant, it's very easy to migrate to a newer version. Just unplug the PDB you want to migrate from the old container and plug it into the new one, and Oracle will do the migration job.
The article Migration with Multitenant: how easy it is! appeared first on dbi Blog.
Collection Variable Types
altering table to enable foreign key constraint doesn't utilize parallel query when desired
Oracle dump into CSV file has an auto-padded field's value
PLSQL_WARNINGS parameter doesn't stick
Upgrade catalogue
This is a list of articles I've written that pick up some details (usually problems or bugs) when upgrading. Each entry has a date stamp and a short note of the contents. The articles are generally listed most-recent first, but the catalogue is in two sections: "Benefits" and "Bugs and Limitations".
Benefits (and other observations)
- Window Sorts (Nov 2022): A possible performance bug that you might never notice in large window sorts disappears in 21c
- Incomplete hinting (July 2022): Hints may “stop working” on an upgrade as a side effect of new transformations
- Index statistics (April 2022): Upgrading to 19c, the code to gather index stats with auto_sample_size can use the “approximate NDV” mechanism on a 100% sample – better stats, but it may take more time.
- Optimizer tweak (July 2021): A little cunning appearing in 19c that eliminates redundant function calls, and gets better cardinality estimates: but that could make execution plans change.
- Descending Index tweak (Dec 2020): From 18c to 19c – an enhancement/correction to the optimizer code allows it to use descending indexes unhinted in more cases where they are appropriate.
- Name lengths (Dec 2020): 19c allows all sorts of object names to be 128 characters instead of 30. You may need to adjust some of your code (formatting, or log table definitions) to allow for this.
- New delete pattern (June 2014): Upgrading from 11g you will find that Oracle can delete through an index fast full scan. This is not always a good idea.
- Upgrade Surprise (March 2022): Upgrading from 12c to 19c, the optimizer will stop unnesting some subqueries based on expressions.
- DCL (lateral decorrelation) (April 2021): You can use lateral views in 12c and above, but Oracle can “decorrelate” them and sometimes gets the wrong results. (On the plus side, subqueries can have correlation predicates more than one step down).
- Pivot cosmetics (Feb 2021): Upgrading from 11g – some reports using pivot may need cosmetic adjustments as a side effect of the legality of longer column names.
- Importing Statistics (April 2018): If you’re still using exp/imp after upgrading to 12c you should make the change or you may lose hybrid and top-frequency histograms on the import.
CreateOUIProcess(): 13 … Permission issue or Library issue?
Recently I have been doing some installs for Oracle GoldenGate via the silent install process; mostly using Ansible. Every once […]
The post CreateOUIProcess(): 13 … Permission issue or Library issue? appeared first on DBASolved.
Exchange Partition
A question appeared recently in a comment on a blog note I wrote about the new (in 12c) deferred global index maintenance that allowed the execution of a drop partition to return very quickly. The question was in two parts:
- Why is an exchange partition so slow by comparison?
- Is there any way to make it quick?
It occurred to me that it might be possible to do something for some cases by taking advantage of the partial indexing feature that also appeared in 12c. It wouldn't eliminate the need for index maintenance, of course, and you clearly couldn't avoid the insert maintenance for the incoming partition completely … but you might be able to ensure that it happens only at a time you find convenient.
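To make the idea concrete, here is a minimal sketch of the partial indexing syntax involved; the table and index are made up, and this is just the mechanism, not a tested answer to the exchange question:
-- one partition is excluded from indexing
create table sales (
        sale_id    number,
        sale_date  date,
        amount     number
)
partition by range (sale_date) (
        partition p_2022 values less than (date '2023-01-01'),
        partition p_2023 values less than (date '2024-01-01') indexing off
);

-- a partial index has no entries for the "indexing off" partition
create index sales_amt_idx on sales(amount) indexing partial;

-- when convenient, switch indexing back on to trigger the index maintenance
alter table sales modify partition p_2023 indexing on;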
There’s a little more detail in my reply to the comment, but this note is a place holder for the article I’ll write if I ever find time to test the idea.
Lob Space redux
At present this is just a place holder to remind me to finish commenting on (and correcting) a mistake I made when I wrote a note about the number of bytes of data you could get into an “enable storage in row” LOB before it had to be stored out of row.
ChatGPT vs DevOps
Like a lot of curious people, I have experimented with ChatGPT to figure out whether it could be helpful in my work as a DevOps consultant, or even replace me (why not?). If you are also interested in seeing how ChatGPT compares with an Oracle DBA, you can check the very interesting blog of my colleague who did that assessment.
First I have to say that I was pleasantly surprised by the accuracy of the responses to basic questions regarding DevOps technologies like Ansible, Kubernetes, Rook Ceph, Docker, GitLab… ChatGPT keeps the context of a chat, so I usually start by asking a general question such as “Do you know Kubernetes?”, where I get a general but accurate response, and from there I ask for more details about a component, a script or a command. Each time the response is quite accurate, and you can challenge ChatGPT when you test a command and it doesn't work: he is able to recognize he was wrong!
First contact
My first feeling is that ChatGPT is a great interactive encyclopedia on which you can quickly get information by asking questions, and the intelligence I see is that it understands the questions quite well, provides accurate answers and keeps the context of the conversation in order to elaborate, give details and even correct itself. Of course you have to guide the responses by asking the right questions, but for learning and getting started I think it's great, as you can quickly gather information about a technology. In that sense it was faster than doing research with a traditional search engine. However, the drawback is that you get answers from a single source of truth, and you lack the diversity of answers you get from various blogs and websites when you do a usual search on the Internet. So you will have to test those answers, cross-check them and challenge ChatGPT for a better answer, as it will also learn from you (don't forget that if it's free, you are the product!).
I’ve previously blogged about Kubernetes SSO using Azure AD and I was curious to know if ChatGPT could have helped me to find the same solution, another one or just found one quicker. Let’s have a look what ChatGPT has in store about it!
ChatGPT vs Kubernetes SSO using Azure AD
What are the steps to implement SSO in Kubernetes with Azure?
- Create an Azure Active Directory (AAD) tenant and register your application in the tenant.
- Create a Service Principal for the registered application and assign necessary permissions.
- Configure AAD as the identity provider in Kubernetes using the AAD integration in Kubernetes.
- Use the Service Principal credentials to authenticate to the Kubernetes cluster.
- Configure Role-based Access Control (RBAC) in Kubernetes to grant access to users and groups in AAD.
- Test SSO by logging in to the Kubernetes cluster using AAD credentials.
Note:
- The above steps are high level steps and there can be more details and steps involved depending on your use case.
- Also, you might need some Azure Kubernetes Service (AKS) related configurations as well.
The first response is already a good summary, and the last note informs us that those are only high-level steps. However, the mention of AKS is not relevant here, but he couldn't have known, as in my question I didn't specify on-premises Kubernetes or Azure Kubernetes.
Could you detail how to configure SSO in Kubernetes?
- Create an Azure Active Directory (AAD) tenant if you do not have one already.
- Register your application in the tenant by going to the Azure portal, selecting Azure Active Directory, and then selecting App registrations.
- Create a Service Principal for the registered application and assign necessary permissions by going to the Azure portal, selecting Azure Active Directory, and then selecting App registrations.
- Configure AAD as the identity provider in Kubernetes by using the AAD integration in Kubernetes. This can be done by modifying the kube-apiserver manifest to include the --oidc-issuer-url, --oidc-client-id and --oidc-ca-file flags and pointing them to the AAD tenant.
- Use the Service Principal credentials to authenticate to the Kubernetes cluster by passing them as environment variables to the kubelet.
- Configure Role-based Access Control (RBAC) in Kubernetes to grant access to users and groups in AAD. This can be done by creating roles and role bindings in Kubernetes that map to users and groups in AAD.
- Test SSO by logging in to the Kubernetes cluster using AAD credentials by running kubectl commands with the --token option and providing a valid AAD token.
Step 4 for configuring the kube-apiserver is added, and ChatGPT gives a bit more detail in the other steps in comparison to my first question. This is all correct and quite good.
Could you show an example of kube-apiserver manifest?
Here's an example of a kube-apiserver manifest that includes the configuration for SSO using Azure Active Directory:
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  labels:
    component: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.19.0
    command:
    - kube-apiserver
    - --oidc-issuer-url=https://sts.windows.net/<AAD tenant ID>/
    - --oidc-client-id=<client ID>
    - --oidc-ca-file=/etc/kubernetes/pki/ca.crt
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --authorization-mode=RBAC
    - --etcd-servers=https://etcd-0.etcd:2379,https://etcd-1.etcd:2379,https://etcd-2.etcd:2379
    - --advertise-address=1.2.3.4
    - --bind-address=0.0.0.0
    - --secure-port=6443
    volumeMounts:
    - name: ca-certs
      mountPath: /etc/kubernetes/pki
  volumes:
  - name: ca-certs
    configMap:
      name: kubernetes-pki
You can see that the kube-apiserver container is configured with the --oidc-issuer-url, --oidc-client-id, and --oidc-ca-file flags, which point to the AAD tenant and client ID, and the certificate authority file for the AAD tenant, respectively. The --oidc-username-claim and --oidc-groups-claim flags are used to map AAD user's claims to the corresponding Kubernetes user and group.
Please note that this is just an example and your actual kube-apiserver manifest may differ depending on your specific environment.
Also, you need to pass the --token option with a valid AAD token to kubectl command, in order to authenticate to the cluster.
Not bad at all! ChatGPT generates an example of a kube-apiserver.yaml file and explains each parameter related to our SSO discussion. Let's push a bit further:
Could you give also an example of .kube/config file to use with this kube-apiserver?
Here's an example of a .kube/config file that can be used to authenticate to a Kubernetes cluster with SSO using Azure Active Directory:
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority-data: <base64 encoded CA data>
    insecure-skip-tls-verify: true
users:
- name: azure-user
  user:
    auth-provider:
      name: azure
      config:
        tenant-id: <tenant id>
        client-id: <client id>
        client-secret: <client secret>
        idp-issuer-url: https://sts.windows.net/<tenant id>/
        idp-certificate-authority-data: <base64 encoded CA data>
        idp-audience: https://kubernetes.default.svc
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: azure-user
current-context: my-context
The above .kube/config file defines a cluster named my-cluster and an user named azure-user. The auth-provider field for the azure-user is set to azure and the config field contains the necessary configuration for authenticating to AAD. The configuration includes the tenant ID, client ID, client secret, and the URL of the AAD tenant. It also includes the CA certificate data for the AAD tenant and the expected audience for the token.
You also need to set the current context as my-context in order to use this configuration file.
Please note that this is just an example and your actual .kube/config file may differ depending on your specific environment.
Not bad either! I like the note of warning at the end, as well as the additional information about the context, which I could dig into deeper by asking a specific question about it.
For those of you who have read my blog, you will notice that the .kube/config file is using the parameters of the deprecated method. ChatGPT is working on data up to 2021, so he may suggest that well-known solution first. As I know this topic well now, I can try to guide him in the direction of the newest method and see what comes out.
Here's an example of a modified .kube/config file that can be used to authenticate to a Kubernetes cluster with SSO using Azure Active Directory and the oidc-login plugin of kubectl:
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority-data: <base64 encoded CA data>
    insecure-skip-tls-verify: true
users:
- name: azure-user
  user:
    exec:
      command: oidc-login
      args:
      - --issuer-url=https://sts.windows.net/<tenant id>/
      - --client-id=<client id>
      - --username-claim=email
      - --groups-claim=groups
      - --ca-file=path/to/ca.crt
      - --extra-scopes=openid,email,groups
      - --extra-audiences=https://kubernetes.default.svc
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: azure-user
current-context: my-context
The above .kube/config file defines a cluster named my-cluster and an user named azure-user. The exec field for the azure-user is set to oidc-login and the args field contains the necessary configuration for authenticating to AAD with the plugin. The configuration includes the issuer URL, client ID, username claim, groups claim, and ca file. Additionally, you can set extra scopes and audiences as well.
When you run kubectl commands with this config, the plugin will automatically launch an OIDC flow and obtain an access token for the user.
Please note that this is just an example and your actual .kube/config file may differ depending on your specific environment and the plugin version you use.
So ChatGPT knows this solution as well and is able to provide the required parameters. To reach that point I needed some knowledge myself in order to ask the question properly, but we can see that we quickly have a basis to get started from.
ChatGPT Limits
If you've read my blogs (Part 1 and Part 2) about SSO, you know that we were looking for a solution to connect the oidc plugin over HTTPS to Azure, which supports only this protocol. Let's see if ChatGPT could have found a solution quicker than us:
Adjust oidc-login plugin azure-user to use https
Here's an example of how you can modify the azure-user configuration in the .kube/config file to use HTTPS with the oidc-login plugin:
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority-data: <base64 encoded CA data>
users:
- name: azure-user
  user:
    exec:
      command: oidc-login
      args:
      - --issuer-url=https://sts.windows.net/<tenant id>/
      - --client-id=<client id>
      - --username-claim=email
      - --groups-claim=groups
      - --ca-file=path/to/ca.crt
      - --extra-scopes=openid,email,groups
      - --extra-audiences=https://kubernetes.default.svc
      - --https
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: azure-user
current-context: my-context
The above .kube/config file defines a cluster named my-cluster and an user named azure-user. The exec field for the azure-user is set to oidc-login and the args field contains the necessary configuration for authenticating to AAD with the plugin. In the args field --https is added to make the plugin use https for redirects and token exchange.
When you run kubectl commands with this config, the plugin will automatically launch an OIDC flow and obtain an access token for the user over HTTPS.
Please note that this is just an example and your actual .kube/config file may differ depending on your specific environment and the plugin version you use.
Well that's interesting, because the --https parameter just doesn't exist, so let's challenge that neuronal beast:
--https doesn't work
I apologize for any confusion, you are correct that the --https flag is not a valid option for the oidc-login plugin. It seems that this flag was included in my previous response by mistake.
To use HTTPS with the oidc-login plugin, you can configure it to use a reverse proxy that supports HTTPS and set the issuer-url and redirect-url accordingly. Additionally, you can ensure that the ca-file option points to a valid CA certificate.
It is also good to configure the endpoints in your Azure Active Directory tenant to use the https protocol.
I apologize for any confusion caused by my previous response, please let me know if you have any other questions.
Very polite indeed, I guess this is where ChatGPT is learning from his mistakes. He then points us in the direction of a reverse proxy, but not to the parameters I was looking for (--local-server-cert and --local-server-key).
I’ve continued to ask for other solutions and other parameters to see if he knew them but he was running in a loop by suggesting other parameters that don’t exist. So I’ve finally given him the solution I was looking for:
What about the --local-server-cert and --local-server-key parameters?
I apologize for my previous errors, you are correct that the oidc-login plugin supports https by using the --local-server-cert and --local-server-key parameters. --local-server-cert allows you to specify the location of the certificate file that should be used to establish an HTTPS connection with the OIDC provider. --local-server-key allows you to specify the location of the private key file that should be used to establish an HTTPS connection with the OIDC provider.
OK! So in this case he was confirming I was correct (call me BenGPT). As you can see, after a certain depth beyond the basic core topic, you reach the limits of ChatGPT… for now at least.
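For reference, here is a minimal sketch of what the working user entry could look like with those two parameters, based on the int128/kubelogin documentation; the issuer URL, client ID and certificate paths are placeholders, and your actual setup may differ:
users:
- name: azure-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://login.microsoftonline.com/<tenant id>/v2.0
      - --oidc-client-id=<client id>
      - --oidc-extra-scope=email
      # serve the local callback over HTTPS, since Azure AD only accepts https redirect URIs
      - --local-server-cert=/path/to/localhost.crt
      - --local-server-key=/path/to/localhost.key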
Conclusion
I come to the same conclusion as my colleague regarding the Oracle DBA: ChatGPT will not replace a DevOps consultant yet, as we can reach its limits when the topic becomes too complex or when we need a creative solution to an issue. In the first part of my SSO blog, I needed to troubleshoot a secret parameter with or without " ", and I think this could not have been found by ChatGPT at this stage.
However, it has been announced that ChatGPT will be updated and become much more powerful, so we can expect it to greatly improve and expand its limits. I will continue to use it, as I am still amazed by its capacity to summarize information about a topic, and it can greatly help to get started on any topic I need to learn as a DevOps consultant. ChatGPT is just the beginning of a new era with AI, and I see it taking more and more importance in our daily work, not as a replacement but as a valuable assistant.
The article ChatGPT vs DevOps appeared first on dbi Blog.
Configure Oracle GoldenGate’s ServiceManager as a Linux Service
After installing Oracle GoldenGate with a manual ServiceManager, you realize that the ServiceManager will not come back up on […]
The post Configure Oracle GoldenGate’s ServiceManager as a Linux Service appeared first on DBASolved.