Sylvain Guilbaud · Feb 18, 2016

Thank you Daniel.

Actually, I've found another way to do it without modifying any % classes!

1. Using Studio, create a class that inherits from "%DeepSee.UI.MDXExcel", e.g. "MDX.export.cls"

2. In MDX.export.cls, override whatever you need to fit your expected rendering, in the appropriate methods: OnPage, %PrintListingResult, %PrintResults, etc.

3. In your DeepSee dashboard, replace the Export Excel parameter of your widget, which corresponds to this line in your .dashboard DFI definition:

<property name="excel">1</property>

with a new navigate control that calls your MDX.export class:

<control name="" action="navigate" target="*" targetProperty="./MDX.export.zen?MDX=...mdx query..." location="widget" type="auto" controlClass="" label="Export Excel" title="" value="" text="" readOnly="false" valueList="" displayList="" activeWhen="">
  <valueRequired>false</valueRequired>
</control>

NB: to retrieve all the parameters of the targetProperty (MDX, FILTERNAMES, FILTERVALUES, TITLE, ...), just run the native Excel export once, then copy the link from your browser's Downloads view.

WARNING: the targetProperty length is limited to 250 characters.
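Given that limit, it helps to check the encoded link before pasting it into the DFI. A minimal sketch (the MDX query and page name below are illustrative; in practice, use the real link copied from your browser's Downloads view):

```python
from urllib.parse import quote

# Illustrative MDX query -- in practice, copy the full link (MDX,
# FILTERNAMES, FILTERVALUES, TITLE, ...) from the browser's Downloads view.
mdx = "SELECT NON EMPTY [Measures].[%COUNT] ON 0 FROM [Patients]"
target_property = "MDX.export.zen?MDX=" + quote(mdx)

# targetProperty is limited to 250 characters:
print(len(target_property),
      "characters:", "OK" if len(target_property) <= 250 else "too long")
```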

Sylvain Guilbaud · Mar 10, 2016

To export globals in XML format, use $system.OBJ.Export:

d $system.OBJ.Export("DeepSee.TermList.GBL","/data/TermList.xml")

Sylvain Guilbaud · Oct 5, 2016

Hi Evgeny, 

this code was written while upgrading a remote DeepSee instance to an async mirror (it was originally based on a shadow-server configuration plus ECP access to the ^OBJ.DSTIME global from the DeepSee instance to production; this was before DSINTERVAL was created).

Of course, this sample can be adapted to add, remove, or modify any other parameter: just change the filter in the query on %Dictionary.ParameterDefinition.

Sylvain Guilbaud · Nov 8, 2016

Installing Caché on Mac OS X is much like installing it on any UNIX® platform.

To install Caché:
  1. Obtain the installation kit (tar.gz) from InterSystems and copy it to the desktop.
  2. Log in as user ID root. It is acceptable to su (superuser) to root while logged in from another account.
  3. See Adjustments for Large Number of Concurrent Processes and make adjustments if needed.
  4. Follow the instructions in the Run the Installation Script section and subsequent sections of the “Installing Caché on UNIX and Linux” chapter of this guide.
Sylvain Guilbaud · Sep 2, 2020

Hello @Robert Cemper,

thanks for your reply to this 5-year-old question!

My question was more about references using Data Connectors in a production.

We can update cubes based on external tables using data connectors through ProcessFact().

Otherwise, I'm doing very well, thank you, and I hope the same goes for you. Seeing your tireless activity, I'd guess you're doing fine.

Greetings from France,

Sylvain

Sylvain Guilbaud · Nov 22, 2021

That's a really significant milestone. Congrats!

10K in less than 6 years means a rate of roughly 140 new members each month.
I'm confident it will take less than 6 years to reach the next 10K members.
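The growth-rate estimate works out as:

```python
members = 10_000
months = 6 * 12                 # a little under 6 years
print(round(members / months))  # -> 139, i.e. roughly 140 new members/month
```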

Sylvain Guilbaud · Feb 11, 2022

Did you try pulling the containers after first logging in successfully?

echo $PASSWORD | docker login -u=your-login --password-stdin containers.intersystems.com
docker pull containers.intersystems.com/intersystems/iris:2022.1.0.114.0
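As a reminder, the registry you must log in to is the first path component of the image reference. A quick sanity check (pure string parsing, no Docker calls):

```python
# Split the image reference used in the pull above into its parts;
# the part before the first slash is the registry to log in to.
image = "containers.intersystems.com/intersystems/iris:2022.1.0.114.0"
registry, _, remainder = image.partition("/")
repository, _, tag = remainder.rpartition(":")
print(registry)    # containers.intersystems.com
print(repository)  # intersystems/iris
print(tag)         # 2022.1.0.114.0
```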
Sylvain Guilbaud · Feb 19, 2022

Thanks for sharing this explanation.

If you want to avoid adding the WITH clause to every DDL statement, you can also change this default behavior by using:

SET status=$SYSTEM.SQL.Util.SetOption("DDLUseExtentSet",0,.oldval)
Sylvain Guilbaud · Feb 22, 2022

Thanks Eduard for sharing your code; it implements a very powerful approach to data snapshots.

Sylvain Guilbaud · Feb 22, 2022

Thanks Robert for your comment.

Merging globals is exactly what the toArchive method does here:

Class data.archive.person Extends (%Persistent, data.current.person)
{

Parameter DEFAULTGLOBAL = "^off.person";

/// Archive every row of the source class into this archive class
ClassMethod archive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc = $$$OK , tableName = ""
    set (archived, archivedErrors, severity) = 0

set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
set targetClassName = ..%ClassName(1)

set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName) 
set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)

set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation

set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation

set tableName = $$$CLASSsqlschemaname($$$gWRK,sourceClassName) _"."_  $$$CLASSsqltablename($$$gWRK,sourceClassName)

if $ISOBJECT(sourceClass)
 & $ISOBJECT(targetClass)
 & tableName '= "" {
    if $ISOBJECT(sourceClass.Storages.GetAt(1))
     & $ISOBJECT(targetClass.Storages.GetAt(1))
     {
        set tStatement=##class(%SQL.Statement).%New(1) 
        kill sql
        set sql($i(sql)) = "SELECT" 
        set sql($i(sql)) = "id"  
        set sql($i(sql)) = "FROM"
        set sql($i(sql)) = tableName
        set sc = tStatement.%Prepare(.sql) 
        set result = tStatement.%Execute()

        kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation 

        while result.%Next() {
            set source = $CLASSMETHOD(sourceClassName,"%OpenId",result.%Get("id"))

            if $ISOBJECT(source) {
                set archive = $CLASSMETHOD(targetClassName,"%New")

                for i = 1:1:sourceClass.Properties.Count() {
                    set propertyName = sourceClass.Properties.GetAt(i).Name
                    set $PROPERTY(archive,propertyName) = $PROPERTY(source,propertyName)
                }

                set sc = archive.%Save()
                if sc {
                    set archived = archived + 1
                } else {
                    set archivedErrors = archivedErrors + 1
                }
            }
        }

        kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation

        set msg ="archive data from " _ sourceClassName _ " into "_ targetClassName _ " result:" _ archived _ " archived (errors:" _ archivedErrors _ ")"

   } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes have not storage definition"
    }
} else {
    set severity = 1
    set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes not found in %Dictionary.ClassDefinition"
}
do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
Return sc

}

ClassMethod toArchive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc = $$$OK

set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
set targetClassName = ..%ClassName(1)
set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName) 
set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)

if $ISOBJECT(sourceClass)
 & $ISOBJECT(targetClass) {
    if $ISOBJECT(sourceClass.Storages.GetAt(1))
     & $ISOBJECT(targetClass.Storages.GetAt(1))
     {

        set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
        set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
        set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation

        set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
        set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
        set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation

        kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation 

        merge @targetDataLocation = @sourceDataLocation
        merge @targetIndexLocation = @sourceIndexLocation
        merge @targetStreamLocation = @sourceStreamLocation

        set ^mergeTrace($i(^mergeTrace)) = $lb($zdt($h,3),sourceDataLocation)

        kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation

        set severity = 0
        set msg = "ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " SUCCESSFULLY"
                

    } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes have not storage definition"
    }
} else {
    set severity = 1
    set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes not found in %Dictionary.ClassDefinition"
}
do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
return sc

}

Storage Default
{
<Data name="personDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>name</Value>
</Value>
<Value name="3">
<Value>dob</Value>
</Value>
<Value name="4">
<Value>activ</Value>
</Value>
<Value name="5">
<Value>created</Value>
</Value>
</Data>
<DataLocation>^off.personD</DataLocation>
<DefaultData>personDefaultData</DefaultData>
<IndexLocation>^off.personI</IndexLocation>
<StreamLocation>^off.personS</StreamLocation>
<Type>%Storage.Persistent</Type>
}

}
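For readers unfamiliar with ObjectScript's MERGE, here is a language-neutral sketch of its effect, using nested dicts as stand-ins for globals (the data values are invented for illustration):

```python
def merge(target: dict, source: dict) -> None:
    """Copy every node of the source subtree into the target,
    overwriting matching nodes and keeping target-only nodes."""
    for key, value in source.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            merge(target[key], value)   # recurse into subtrees
        else:
            target[key] = value

# Stand-ins for the archive global (^off.person) and the source data global:
archive = {1: "alice", 2: "bob"}
source = {2: "bob-updated", 3: "eve"}

merge(archive, source)
print(archive)   # {1: 'alice', 2: 'bob-updated', 3: 'eve'}
```

This mirrors why the method kills the target subtrees first when purgeArchive is set: MERGE alone never deletes nodes that already exist in the target.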

Sylvain Guilbaud · Feb 25, 2022

Thanks Dmitry for your reply.

Actually, I know all of this; that's why I don't understand why it's not working anymore...

  1. gh repo clone intersystems-community/sam
  2. cd sam
  3. tar xvzf sam-1.0.0.115-unix.tar.gz
  4. cd sam-1.0.0.115-unix
  5. ./start.sh

Then I create a cluster + a target on my local (non-containerized) instance:

iris list irishealth

Configuration 'IRISHEALTH'
    directory:    /Users/guilbaud/is/irishealth
    versionid:    2021.2.0.649.0
    datadir:      /Users/guilbaud/is/irishealth
    conf file:    iris.cpf  (SuperServer port = 61773, WebServer = 52773)
    status:       running, since Fri Feb 25 15:35:32 2022
    state:        ok
    product:      InterSystems IRISHealth

I check that /api/monitor/metrics responds correctly:

curl http://127.0.0.1:52773/api/monitor/metrics -o metrics
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17634  100 17634    0     0  14174      0  0:00:01  0:00:01 --:--:-- 14383

Sylvain Guilbaud · Feb 25, 2022

For containers, I'm using docker-compose.

I've tried putting SAM in the same yml file to get everything on the same network, but nothing works (0.0.0.0, etc.).

Sylvain Guilbaud · Feb 25, 2022

After a reboot, I'm again able to reach one local instance (out of two), zero containers, and the IRIS-SAM instance.

docker-compose.yml

version: '3.7'

networks:
  dockernet:
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/24

services:
  arbiter:
    image: containers.intersystems.com/intersystems/arbiter:2022.1.0.131.0
    init: true
    command:
      - /usr/local/etc/irissys/startISCAgent.sh 2188
    hostname: arbiter
    container_name: arbiter
    ports:
      - 50100:2188
    networks:
      dockernet:
        ipv4_address: 172.19.0.100

  iris-a:
    init: true
    build:
      context: .
    image: iris:2022.1.0.114.0
    hostname: iris-a
    container_name: iris-a
    environment:
      - ISC_DATA_DIRECTORY=/InterSystems
    volumes:
      - ./data:/data
      - ./volumes/InterSystems:/InterSystems
      - ./keys/iris.key:/usr/irissys/mgr/iris.key
    ports:
      - 50004:52773
      - 50005:1972
    networks:
      dockernet:
        ipv4_address: 172.19.0.10

  iris-b:
    init: true
    build:
      context: .
    image: iris:2022.1.0.114.0
    hostname: iris-b
    container_name: iris-b
    environment:
      - ISC_DATA_DIRECTORY=/InterSystems
    volumes:
      - ./data:/data
      - ./volumes/InterSystems-b:/InterSystems
      - ./keys/iris.key:/usr/irissys/mgr/iris.key
    ports:
      - 50014:52773
      - 50015:1972
    networks:
      dockernet:
        ipv4_address: 172.19.0.20

  webgateway:
    hostname: webgateway
    container_name: webgateway
    depends_on:
      - iris-a
      - iris-b
      - arbiter
    image: containers.intersystems.com/intersystems/webgateway:2022.1.0.131.0
    ports:
      - 50243:443
      - 50200:80
    environment:
      - ISC_DATA_DIRECTORY=/webgateway
      - IRIS_USER=CSPsystem
      - IRIS_PASSWORD=SYS
    networks:
      dockernet:
        ipv4_address: 172.19.0.200
    volumes:
      - "./volumes/webgateway:/webgateway"

  postgres:
    container_name: postgres
    image: postgres:13.4-alpine3.14
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - ./src/sql/postgreSQL:/docker-entrypoint-initdb.d/
      - ./volumes/postgreSQL:/var/lib/postgresql/data
    ports:
      - 50006:5432
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 30s
      timeout: 30s
      retries: 3
    networks:
      dockernet:
        ipv4_address: 172.19.0.11

  mssql:
    container_name: mssql
    image: 'mcr.microsoft.com/mssql/server:2019-latest'
    ports:
      - '50007:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Secret1234
    volumes:
      - './volumes/mssql:/var/opt/mssql'
    networks:
      dockernet:
        ipv4_address: 172.19.0.12

  sam-alertmanager:
    container_name: sam-alertmanager
    command:
      - --config.file=/config/isc_alertmanager.yml
      - --data.retention=24h
      - --cluster.listen-address=
    depends_on:
      - sam-iris
      - sam-prometheus
    expose:
      - '9093'
    image: prom/alertmanager:v0.20.0
    restart: on-failure
    volumes:
      - ./sam/config/alertmanager:/config
    networks:
      - dockernet

  sam-grafana:
    container_name: sam-grafana
    depends_on:
      - sam-prometheus
    expose:
      - '3000'
    image: grafana/grafana:6.7.1
    restart: on-failure
    volumes:
      - ./sam/data/grafana:/var/lib/grafana
      - ./sam/config/grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./sam/config/grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
      - ./sam/config/grafana/dashboard-provider.yml:/etc/grafana/provisioning/dashboards/dashboard-provider.yml
      - ./sam/config/grafana/dashboard.json:/var/lib/grafana/dashboards/dashboard.json
    networks:
      - dockernet

  sam-iris:
    container_name: sam-iris
    environment:
      - ISC_DATA_DIRECTORY=/dur/iconfig
    expose:
      - '51773'
      - '52773'
    hostname: IRIS
    image: store/intersystems/sam:1.0.0.115
    init: true
    restart: on-failure
    volumes:
      - ./sam/data/iris:/dur
      - ./sam/config:/config
    networks:
      - dockernet

  sam-nginx:
    container_name: sam-nginx
    depends_on:
      - sam-iris
      - sam-prometheus
      - sam-grafana
    image: nginx:1.17.9-alpine
    ports:
      - 8080:8080
    restart: on-failure
    volumes:
      - ./sam/config/nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - dockernet

  sam-prometheus:
    container_name: sam-prometheus
    command:
      - --web.enable-lifecycle
      - --config.file=/config/isc_prometheus.yml
      - --storage.tsdb.retention.time=2h
    networks:
      - dockernet
    depends_on:
      - sam-iris
    expose:
      - '9090'
    image: prom/prometheus:v2.17.1
    restart: on-failure
    volumes:
      - ./sam/config/prometheus:/config

  # openldap:
  #   image: bitnami/openldap:2
  #   ports:
  #     - '50008:1389'
  #     - '50009:1636'
  #   environment:
  #     - LDAP_ADMIN_USERNAME=admin
  #     - LDAP_ADMIN_PASSWORD=adminpassword
  #     - LDAP_USERS=user01,user02
  #     - LDAP_PASSWORDS=password1,password2
  #   volumes:
  #     - ./volumes/openldap_data:/bitnami/openldap
  #   networks:
  #     dockernet:
  #       ipv4_address: 172.19.0.172

docker ps

CONTAINER ID   IMAGE                                                                COMMAND                  CREATED              STATUS                                 PORTS                                                                               NAMES
5ee1a03673cb   nginx:1.17.9-alpine                                                  "nginx -g 'daemon of…"   About a minute ago   Up About a minute                      80/tcp, 0.0.0.0:8080->8080/tcp                                                      sam-nginx
cce6e64a4fd6   grafana/grafana:6.7.1                                                "/run.sh"                About a minute ago   Up About a minute                      3000/tcp                                                                            sam-grafana
bbadfa5c326e   prom/alertmanager:v0.20.0                                            "/bin/alertmanager -…"   About a minute ago   Up About a minute                      9093/tcp                                                                            sam-alertmanager
89df4b965a3b   containers.intersystems.com/intersystems/webgateway:2022.1.0.131.0   "/startWebGateway"       About a minute ago   Up About a minute (healthy)            0.0.0.0:50200->80/tcp, 0.0.0.0:50243->443/tcp                                       webgateway
260b51880b60   prom/prometheus:v2.17.1                                              "/bin/prometheus --w…"   About a minute ago   Up About a minute                      9090/tcp                                                                            sam-prometheus
961b84b9ff4a   iris:2022.1.0.114.0                                                  "/iris-main"             About a minute ago   Up About a minute (health: starting)   2188/tcp, 53773/tcp, 54773/tcp, 0.0.0.0:50015->1972/tcp, 0.0.0.0:50014->52773/tcp   iris-b
b0c23098794a   postgres:13.4-alpine3.14                                             "docker-entrypoint.s…"   About a minute ago   Up About a minute (healthy)            0.0.0.0:50006->5432/tcp                                                             postgres
4cb33c8c36e3   store/intersystems/sam:1.0.0.115                                     "/iris-main"             About a minute ago   Up About a minute (healthy)            2188/tcp, 51773/tcp, 52773/tcp, 53773/tcp, 54773/tcp                                sam-iris
54e117a4e856   iris:2022.1.0.114.0                                                  "/iris-main"             About a minute ago   Up About a minute (healthy)            2188/tcp, 53773/tcp, 54773/tcp, 0.0.0.0:50005->1972/tcp, 0.0.0.0:50004->52773/tcp   iris-a
2c5133d95693   containers.intersystems.com/intersystems/arbiter:2022.1.0.131.0      "/arbiterEntryPoint.…"   About a minute ago   Up About a minute (healthy)            0.0.0.0:50100->2188/tcp                                                             arbiter
307e59e12fdc   mcr.microsoft.com/mssql/server:2019-latest                           "/opt/mssql/bin/perm…"   About a minute ago   Up About a minute                      0.0.0.0:50007->1433/tcp                                                             mssql

isc_prometheus.yml

Sylvain Guilbaud · Feb 25, 2022

I'm running the latest version of Docker Desktop: 4.5.0 (74594).

docker version

Client:
 Cloud integration: v1.0.22
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:46:56 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Desktop 4.5.0 (74594)
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:56 2021
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Sylvain Guilbaud · Feb 28, 2022

ping and wget work well from the SAM containers (tested from the prometheus and nginx containers); I'm able to reach every IRIS instance (containers and non-containers).

But from the SAM UI, I'm still facing unreachable errors.

Sylvain Guilbaud · Feb 28, 2022
docker exec -ti -u root sam-prometheus-1 sh

/prometheus # wget http://172.20.10.3:50004/api/monitor/metrics -O iris-a-metrics
Connecting to 172.20.10.3:50004 (172.20.10.3:50004)
saving to 'iris-a-metrics'
iris-a-metrics       100% |*******************************|  8584  0:00:00 ETA
'iris-a-metrics' saved

/prometheus # wget http://172.20.10.3:50014/api/monitor/metrics -O iris-b-metrics
Connecting to 172.20.10.3:50014 (172.20.10.3:50014)
saving to 'iris-b-metrics'
iris-b-metrics       100% |*******************************|  7901  0:00:00 ETA
'iris-b-metrics' saved

/prometheus # wget http://172.20.10.3:52773/api/monitor/metrics -O iris-health-metrics
Connecting to 172.20.10.3:52773 (172.20.10.3:52773)
saving to 'iris-health-metrics'
iris-health-metrics  100% |*******************************| 24567  0:00:00 ETA
'iris-health-metrics' saved

/prometheus # wget http://172.20.10.3:52774/api/monitor/metrics -O iris-metrics
Connecting to 172.20.10.3:52774 (172.20.10.3:52774)
saving to 'iris-metrics'
iris-metrics         100% |*******************************|  6110  0:00:00 ETA
'iris-metrics' saved
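The same reachability test can be scripted outside the containers. A small sketch (the host/port pairs above, e.g. 172.20.10.3:50004, would be the real arguments; the demo below uses a throwaway local listener so it runs anywhere):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener; in practice you would call
# e.g. can_connect("172.20.10.3", 50004) for each target instance.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
reachable = can_connect("127.0.0.1", port)
print(reachable)   # True
listener.close()
```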
Sylvain Guilbaud · Feb 28, 2022
docker logs sam-prometheus-1

level=info ts=2022-02-28T16:14:51.209Z caller=main.go:333 msg="Starting Prometheus" version="(version=2.17.1, branch=HEAD, revision=ae041f97cfc6f43494bed65ec4ea4e3a0cf2ac69)"
level=info ts=2022-02-28T16:14:51.210Z caller=main.go:334 build_context="(go=go1.13.9, user=root@806b02dfe114, date=20200326-16:18:19)"
level=info ts=2022-02-28T16:14:51.210Z caller=main.go:335 host_details="(Linux 5.10.76-linuxkit #1 SMP Mon Nov 8 10:21:19 UTC 2021 x86_64 aa300025820c (none))"
level=info ts=2022-02-28T16:14:51.210Z caller=main.go:336 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2022-02-28T16:14:51.210Z caller=main.go:337 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2022-02-28T16:14:51.215Z caller=web.go:514 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2022-02-28T16:14:51.214Z caller=main.go:667 msg="Starting TSDB ..."
level=info ts=2022-02-28T16:14:51.225Z caller=head.go:575 component=tsdb msg="replaying WAL, this may take awhile"
level=info ts=2022-02-28T16:14:51.227Z caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
level=info ts=2022-02-28T16:14:51.227Z caller=head.go:627 component=tsdb msg="WAL replay completed" duration=1.4381ms
level=info ts=2022-02-28T16:14:51.228Z caller=main.go:683 fs_type=EXT4_SUPER_MAGIC
level=info ts=2022-02-28T16:14:51.228Z caller=main.go:684 msg="TSDB started"
level=info ts=2022-02-28T16:14:51.228Z caller=main.go:788 msg="Loading configuration file" filename=/config/isc_prometheus.yml
ts=2022-02-28T16:14:51.234Z caller=dedupe.go:112 component=remote level=info remote_name=43bf89 url=http://iris:52773/api/sam/private/db/write msg="starting WAL watcher" queue=43bf89
ts=2022-02-28T16:14:51.234Z caller=dedupe.go:112 component=remote level=info remote_name=43bf89 url=http://iris:52773/api/sam/private/db/write msg="replaying WAL" queue=43bf89
level=info ts=2022-02-28T16:14:51.239Z caller=main.go:816 msg="Completed loading of configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:14:51.239Z caller=main.go:635 msg="Server is ready to receive web requests."
ts=2022-02-28T16:14:59.846Z caller=dedupe.go:112 component=remote level=info remote_name=43bf89 url=http://iris:52773/api/sam/private/db/write msg="done replaying WAL" duration=8.6116179s
level=info ts=2022-02-28T16:36:46.583Z caller=main.go:788 msg="Loading configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:36:46.592Z caller=main.go:816 msg="Completed loading of configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:37:14.966Z caller=main.go:788 msg="Loading configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:37:14.974Z caller=main.go:816 msg="Completed loading of configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:37:37.074Z caller=main.go:788 msg="Loading configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:37:37.081Z caller=main.go:816 msg="Completed loading of configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:37:53.001Z caller=main.go:788 msg="Loading configuration file" filename=/config/isc_prometheus.yml
level=info ts=2022-02-28T16:37:53.007Z caller=main.go:816 msg="Completed loading of configuration file" filename=/config/isc_prometheus.yml