Ok, strange. Maybe you could try to run the core simulator Docker image on the same server and see if you can start the setup from there. If this works, it must be something in your network/firewall that prevents WebSocket connections to the remote.
If you are running in docker, you need to connect via http://{docker-host-ip}:8088
That should work just fine
I didn’t even realize until just now, Cockpit runs on port 9090.
Edit: Geezus, map a different port and it works perfectly
THIS is the answer! Thank you @JackPowell!!!
The default integration port is 9090. On the remote, it auto-increments by one with each added integration. When running in Docker, you are in charge of assigning different ports.
The manager, which spins up a web server of its own, runs on port 8088 by default. This is different from the integration, which runs on 9090 (by default) or whatever you set UC_INTEGRATION_HTTP_PORT to.
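In other words, the driver effectively picks its port like this (a sketch of the behavior described above, not the actual ucapi code; the function name is mine):

```python
import os

# Default integration API port. The remote auto-increments this for each
# integration it hosts; in Docker you assign unique ports yourself.
DEFAULT_PORT = 9090

def integration_port() -> int:
    """Return the HTTP port the integration listens on.

    Falls back to 9090 when UC_INTEGRATION_HTTP_PORT is unset, matching
    the documented default.
    """
    return int(os.environ.get("UC_INTEGRATION_HTTP_PORT", DEFAULT_PORT))
```

This is why setting UC_INTEGRATION_HTTP_PORT per container is enough to keep two integrations from colliding.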
I had a mistake in my readme where I set the environment variable for the manager port to 9090 which caused some confusion earlier. It has been corrected.
The takeaway is: if you run any more integrations in your Docker environment, make sure you set a unique UC_INTEGRATION_HTTP_PORT for each one.
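As an illustration, a compose file for two integrations side by side might look like this (image names and the second integration are placeholders; only the unique-port pattern is the point):

```yaml
services:
  intg-manager:
    image: example/intg-manager:latest   # placeholder image name
    environment:
      - UC_INTEGRATION_HTTP_PORT=9090
    ports:
      - "8088:8088"   # manager web UI
      - "9090:9090"   # integration API
  other-intg:
    image: example/other-intg:latest     # placeholder image name
    environment:
      - UC_INTEGRATION_HTTP_PORT=9091    # unique port per integration
    ports:
      - "9091:9091"
```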
I wasn’t aware of the incremental increase. I assumed port 9090 (or 8090 now, in my case) just allowed the remote to access any of the integrations.
It’s a good learning experience for me. Appreciate your help.
Ya wanna fix my esphome Bluetooth trackers now too. LoL
kidding. Sort of.
Thanks again.
I got the same problem.
I uploaded the integration manager to my remote3 and started the setup.
At the first time the remote was detected with the correct address (192.168.168.37).
I entered the PIN and placed the RC in a dock to charge.
I then typed in http://192.168.168.37:8088 on my Macbook and got the message “Safari can not connect to the server…”.
I ran the setup again and then again tried to connect. No joy!
I uninstalled and reinstalled the integration manager and ran the setup again. This time it detected the remote on address 127.0.0.1 ???
To make it short: whatever I did, nothing helped.
Any ideas?
Andreas
Both IP addresses are correct. The first is the external one and 127.0.0.1 is the internal one. To reach the manager from a PC, only the first one works.
A log might give more information. The author is also present on Discord.
Ralf
Interestingly, after resetting my remote3 to factory settings, restoring a backup from an earlier setup (without the Integration Manager) and installing the IM again did the job.
I can now access the manager correctly.
@JackPowell I have found that if the docker container is dropped and rebuilt from the same docker compose, the UI no longer works. The only way I have found to correct this is to remove the integration and re-add it to the remote.
The remote shows that it is connected to the integration, and the integration shows the remote connecting in the logs. But the web server never starts back up. Does this sound correct to you?
Logs:
DEBUG:ucapi.api:Publishing driver: name=intg_manager_driver._uc-integration._tcp.local., host=docker-desktop.local.:9090
INFO:ucapi.api:Driver is up: intg_manager_driver, version: 1.2.1, api: 0.5.1, listening on: 0.0.0.0:9090
INFO:ucapi.api:WS: Client added: ('127.0.0.1', 62110)
DEBUG:ucapi.api:[('127.0.0.1', 62110)] ->: {'kind': 'resp', 'req_id': 0, 'code': 200, 'msg': <WsMessages.AUTHENTICATION: 'authentication'>, 'msg_data': {}}
DEBUG:ucapi.api:[('127.0.0.1', 62110)] <-: {"id":10,"kind":"req","msg":"get_driver_version"}
DEBUG:ucapi.api:[('127.0.0.1', 62110)] ->: {'kind': 'resp', 'req_id': 10, 'code': 200, 'msg': <WsMsgEvents.DRIVER_VERSION: 'driver_version'>, 'msg_data': {'name': 'Integration Manager', 'version': {'api': '0.20.0', 'driver': '1.2.1'}}}
DEBUG:ucapi.api:[('127.0.0.1', 62110)] <-: {"id":11,"kind":"event","msg":"connect"}
DEBUG:ucapi.api:[('127.0.0.1', 62110)] =>: {'kind': 'event', 'msg': <WsMsgEvents.DEVICE_STATE: 'device_state'>, 'msg_data': {'state': <DeviceStates.CONNECTED: 'CONNECTED'>}, 'cat': <EventCategory.DEVICE: 'DEVICE'>}
DEBUG:ucapi.api:[('127.0.0.1', 62110)] <-: {"id":12,"kind":"req","msg":"get_entity_states"}
DEBUG:ucapi.api:[('127.0.0.1', 62110)] ->: {'kind': 'resp', 'req_id': 12, 'code': 200, 'msg': <WsMsgEvents.ENTITY_STATES: 'entity_states'>, 'msg_data': }
INFO:ucapi.api:[('127.0.0.1', 62110)] WS: Client removed
INFO:ucapi.api:WS: Client added: ('127.0.0.1', 58262)
DEBUG:ucapi.api:[('127.0.0.1', 58262)] ->: {'kind': 'resp', 'req_id': 0, 'code': 200, 'msg': <WsMessages.AUTHENTICATION: 'authentication'>, 'msg_data': {}}
DEBUG:ucapi.api:[('127.0.0.1', 58262)] <-: {"id":1,"kind":"req","msg":"get_driver_version"}
DEBUG:ucapi.api:[('127.0.0.1', 58262)] ->: {'kind': 'resp', 'req_id': 1, 'code': 200, 'msg': <WsMsgEvents.DRIVER_VERSION: 'driver_version'>, 'msg_data': {'name': 'Integration Manager', 'version': {'api': '0.20.0', 'driver': '1.2.1'}}}
DEBUG:ucapi.api:[('127.0.0.1', 58262)] <-: {"id":2,"kind":"event","msg":"connect"}
DEBUG:ucapi.api:[('127.0.0.1', 58262)] =>: {'kind': 'event', 'msg': <WsMsgEvents.DEVICE_STATE: 'device_state'>, 'msg_data': {'state': <DeviceStates.CONNECTED: 'CONNECTED'>}, 'cat': <EventCategory.DEVICE: 'DEVICE'>}
DEBUG:ucapi.api:[('127.0.0.1', 58262)] <-: {"id":3,"kind":"req","msg":"get_entity_states"}
DEBUG:ucapi.api:[('127.0.0.1', 58262)] ->: {'kind': 'resp', 'req_id': 3, 'code': 200, 'msg': <WsMsgEvents.ENTITY_STATES: 'entity_states'>, 'msg_data': }
And I just released version 1.3.2 with some nice new features too. If you have problems upgrading send me some logs and I’ll take a look
It doesn’t. I’ll try to replicate. What version were you running? I fixed a similar sounding problem in 1.2.1. I just released 1.3.2 about an hour ago
I was on 1.2.1 and just upgraded to 1.3.2. Unfortunately, it is still failing when the container is dropped and rebuilt. Sorry to be such a pain!
Thank you very much for this integration.
I’ve updated the docker commands in the readme to address the configuration being lost after an upgrade. This only applies to those of you running the integration manager in docker.
Basically, just make sure you set the environment variable UC_CONFIG_HOME.
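As a sketch of what that looks like in compose (the /config container path comes from the discussion above; the host-side directory name is my placeholder, so check the readme for the exact commands):

```yaml
services:
  intg-manager:
    environment:
      - UC_CONFIG_HOME=/config           # where config.json gets written
    volumes:
      - ./intg-manager-config:/config    # persists across container rebuilds
```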
Will check it out tomorrow after watchtower does my updates.
I did get a notification from HA that there is a new 1.3.4 version, so that’s working well.
Thanks @JackPowell! This is a great step forward! I was able to drop the container and rebuild it. It came back up and I was able to connect. The only issue now is that it did not retain my settings. Such as my Home Assistant URL/token and anything under the Settings menu.
Correction: Configurations on Settings tab DID save. But the HA notification settings did not.
So there are currently two config/setting files. One establishes the connection with the remote. This is config.json, and it is being saved properly if you have the correct env variable set. Cool. The other is the manager settings. This includes the settings page, any backups you have taken of integrations, and now your notification settings. These you can currently only export and then re-import to restore. This was done because if the integration is installed on the remote, there is no persistent storage. But I did overlook the docker angle, and I need to write this out to your /config directory too. I’ll update this in the next release, but don’t forget to export the settings before the upgrade since the current code isn’t doing it for you.
@JackPowell I don’t think enough people have said this yet - You are doing some AMAZING work and it is very much appreciated! THANK YOU!!! And happy New Year!