I’m trying to implement the FYTA API integration for the Unfolded Circle remote control and would appreciate some guidance on how to get started.
I’ve been looking at the Home Assistant FYTA integration implementation to understand how they handle the API communication and data structures. However, I’m not quite sure how to translate this into a working integration for the remote control.
Some specific questions:
What would be the best approach to structure the integration?
How should we handle the API authentication and data polling?
Are there any specific patterns or practices I should follow for the remote control integration?
Has anyone here worked with the FYTA API before or built similar integrations for the remote control? Any tips or pointers would be greatly appreciated!
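For what it’s worth, here is a rough sketch of the login-then-poll flow I have in mind, in Node.js. The endpoint paths (`/api/auth/login`, `/api/user-plant`) and the `access_token` field name are assumptions based on how the Home Assistant integration appears to talk to FYTA, so they need to be verified:

```javascript
// Sketch of the FYTA login + polling flow (Node 18+, global fetch).
// NOTE: endpoint paths and response field names are ASSUMPTIONS based on
// the Home Assistant FYTA integration and may differ from the real API.

const FYTA_BASE = "https://web.fyta.de/api"; // assumed base URL

// Build the login request separately so it can be inspected and tested.
function buildLoginRequest(email, password) {
  return {
    url: `${FYTA_BASE}/auth/login`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email, password }),
    },
  };
}

async function login(email, password) {
  const { url, options } = buildLoginRequest(email, password);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`FYTA login failed: ${res.status}`);
  const data = await res.json();
  return data.access_token; // field name assumed
}

async function fetchPlants(token) {
  const res = await fetch(`${FYTA_BASE}/user-plant`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`FYTA plant fetch failed: ${res.status}`);
  return res.json();
}

module.exports = { buildLoginRequest, login, fetchPlants };
```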
I’ve been working on a Node.js integration for FYTA plant sensors with the Remote Two, and I’ve made some progress but I’m stuck with packaging issues. I’ve shared my code on GitHub in hopes that someone with more experience might be able to help.
Repository
What I’ve Done So Far
I initially tried to build the integration in Rust as suggested in the forum, but I ran into some challenges with the FYTA API (no suitable Rust library, and the API requires specific handling). So I switched to Node.js using the integration-node-library, which has been easier to work with for the API integration.
I’ve implemented:
A FYTA API client that handles authentication and fetching sensor data (based on my understanding of their API, but not yet tested with real credentials)
An entity manager that creates and updates sensor entities
The main integration driver with setup flow for credentials
Where I’m Stuck
I’m having trouble with the package structure. When I try to upload my integration package to the Remote Two simulator, I keep getting a “Binary directory not found” error. I’ve tried multiple package structures:
Simple structure with all files in the root
Node.js structure with files in bin/ directory
Native-like structure with a shell script as the binary
None of these approaches have worked. The simulator always rejects the package with the same error.
My Questions
Has anyone successfully uploaded a custom Node.js integration to the Remote Two simulator? If so, could you share your package structure?
Is there something specific about the “Binary directory not found” error that I’m missing? The documentation mentions different structures for native and Node.js integrations, but I’m not sure if I’m following it correctly.
Are there any working examples of Node.js integration packages that I could look at?
I’d really appreciate any help or guidance. Thanks in advance!
Dear @kennymc.c, Thank you for sharing those helpful resources! I’ve successfully implemented the package structure based on the documentation and the Roon integration example, and I can now upload the integration to the Remote Two.
Here’s what I’ve learned and accomplished:
I followed the structure outlined in the driver-installation.md documentation, placing the driver.json in the root and all Node.js files in the bin directory.
The Roon integration’s build workflow was particularly helpful in understanding how to structure a Node.js integration. I noticed they rename their index.js to driver.js and make it executable, which was a key insight.
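For anyone following along, the layout that finally uploaded for me looks roughly like this (file names per the Roon example; treat it as a sketch, not a spec):

```text
fyta-integration.tar.gz
├── driver.json          # driver metadata, in the archive root
└── bin/
    ├── driver.js        # entry point (Roon renames index.js to this; must be executable)
    └── ...              # remaining Node.js sources / node_modules
```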
I’ve successfully uploaded the package to the Remote Two simulator (shown in this video): https://vimeo.com/1065520021
However, I’m still facing an issue with the configuration interface. As you can see in the video, when I click on the integration to configure it, I see this empty screen with just a “Next” button.
The setup_data_schema in my driver.json appears to be recognized (since the + button appears), but the form fields aren’t being displayed properly. I’ve tried various formats for the schema but haven’t found the right combination yet.
My next steps are to:
Investigate the correct setup_data_schema format by examining the Roon and Global Caché integrations more closely
Fix the FYTA API client to properly authenticate with real credentials
Implement proper entity creation and management
If anyone has experience with the setup_data_schema format for Remote Two integrations, I’d greatly appreciate your insights!
Your driver.json seems to be missing the settings array and also has some unknown objects like properties and required. Unfortunately, the setup_data_schema is not very well documented, but you can compare your driver.json with other integrations or with examples from the different APIs/libraries. One of my own integrations also uses a number field with restricted input: ucr2-integration-requests/intg-requests/setup.py at f7c1be75663a35b673a66e8d1a5ab55079986e59 · kennymc-c/ucr2-integration-requests · GitHub
If you encounter more problems, you can sometimes also get a hint by monitoring the core log.
It’s also a good idea to put all fields that require user input on a second page that can be generated dynamically by the integration driver. Otherwise, all fields would be empty again when the user restarts the integration setup for reconfiguration, because the driver.json doesn’t get reloaded until you remove the driver from the remote and install it again.
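As a rough example, a schema with a settings array could look like this (field names are my best guess from other integrations, not authoritative, so double-check them against a working driver.json):

```json
"setup_data_schema": {
  "title": { "en": "FYTA account" },
  "settings": [
    {
      "id": "email",
      "label": { "en": "E-mail address" },
      "field": { "text": { "value": "" } }
    },
    {
      "id": "password",
      "label": { "en": "Password" },
      "field": { "password": { "value": "" } }
    }
  ]
}
```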
Thank you @kennymc.c for your feedback! I’ve made progress with the FYTA integration:
The integration now uploads correctly to the Remote Two, but I’m still encountering the connection issue (error 111).
After examining your suggestion about the driver.json file, I’ve implemented a two-stage setup process:
The initial screen in driver.json only has a simple checkbox to proceed
The actual credentials form is generated dynamically by the driver in response to the DriverSetup event
I’ve also ensured the WebSocket server initializes properly with the IntegrationAPI, handling all required events (Connect, Disconnect, GetDriverVersion, etc.).
The current blocker is that the Remote Two can’t connect to the integration’s WebSocket server after installation. I’ve verified the port (9090) is specified correctly in both files, but I’m still investigating if there’s an environment-specific issue or if I’m missing something in the initialization process.
Have you or anyone else encountered similar connection issues with custom integrations? Any suggestions on debugging the WebSocket connection would be greatly appreciated!
After examining the logs, I’ve discovered that the simulator doesn’t actually run custom integration code in a sandbox environment like the real Remote Two device would. I see messages like:
WARN remote_core::actors::system::command: StartCustomIntegration not supported in Simulator
And connection errors:
ERROR remote_core::intg::external::handler::integration_driver: [fyta_plant_monitor] Error connecting to driver ws://127.0.0.1:9001: ServiceUnavailable("Failed to connect to host: Connection refused (os error 111)")
From what I understand, the simulator expects my integration to be running externally on port 9001, but doesn’t actually start it.
My questions:
Is this the intended workflow for testing custom integrations with the simulator? Do I need to run my Node.js driver externally on port 9001 while the simulator tries to connect to it?
Are there any specific configuration changes needed in my integration code when running it externally versus how it would run on the actual device?
Is there documentation that explains this development workflow in more detail? I may have missed it in the existing docs.
Are there any best practices or tips for efficiently testing custom integrations given this limitation of the simulator?
I appreciate any clarification on this matter. It would be very helpful to understand the proper development workflow when using the simulator for testing custom integrations.
Sorry, I must have missed that you’re using the simulator and not the real remote hardware. Yes, custom integrations are not working with the simulator.
If you don’t have a Remote yet, you can also use external integrations with the simulator. Adding them to the remote is a bit different, especially when using the Docker image, which doesn’t support mDNS discovery. The integration itself can remain the same.
When using the normal remote hardware, the WebSocket server that the integration starts can run on any port on any server that is in the same network as the remote. The integration only has to advertise the port in the driver.json. For external integrations this data also includes the driver url.
While you can use mDNS on the remote hardware to discover new integrations in the web configurator under entities/add new, for the simulator you have to use the Core API to manually add the integration.
You can find documentation for the REST Core API under simulator-ip/doc/core-rest/.
I’m using a shell script that automatically adds the driver url and registers the integration:
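In essence it merges the driver url into the driver.json content and POSTs the result to the api/intg/drivers endpoint. Sketched here in Node.js rather than shell (host, port and auth are placeholders; how you authenticate depends on your core setup):

```javascript
// Sketch of driver registration against the simulator's core REST API.
// Endpoint and payload shape follow the description in this thread
// (POST api/intg/drivers with the driver.json content plus driver_url).
// Host, port, and any auth header below are PLACEHOLDERS.

// Merge the external driver URL into the existing driver metadata.
function buildRegistrationPayload(driverJson, driverUrl) {
  return { ...driverJson, driver_url: driverUrl };
}

async function registerDriver(coreBase, payload, headers = {}) {
  const res = await fetch(`${coreBase}/api/intg/drivers`, {
    method: "POST",
    // Add an auth header here if your core requires it.
    headers: { "Content-Type": "application/json", ...headers },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Registration failed: ${res.status}`);
  return res.json();
}

// Example usage (placeholder values):
// const driver = JSON.parse(require("fs").readFileSync("driver.json", "utf8"));
// const payload = buildRegistrationPayload(driver, "ws://192.168.68.57:9001");
// registerDriver("http://localhost:8080", payload).then(console.log);
```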
I’ve been trying to run my FYTA integration with the simulator and I’m facing some connection issues. According to your post, custom integrations don’t work with the simulator, so I’m trying to use an external integration approach as you suggested.
Here’s what I’ve tried so far:
I have the FYTA service running on my Mac at port 9001, and I can see it’s running and waiting for connections.
I configured the integration in driver.json to use my Mac’s IP address (192.168.68.57:9001) and packaged it.
When I try to connect from the simulator, I get connection errors in the logs showing it’s trying to connect to “ws://127.0.0.1:9001” instead of using my Mac’s IP address from the driver.json file:
Error connecting to driver ws://127.0.0.1:9001: ServiceUnavailable("Failed to connect to host: Connection refused (os error 111)")
I tried using “ws://host.docker.internal:9001” in the driver.json file instead, but the simulator still attempts to connect to 127.0.0.1.
I also tried enabling network_mode: “host” in the docker-compose.yml file, but then I couldn’t connect to the web configurator anymore.
I’m a bit confused about what’s wrong with my setup. Is this an issue with my configuration, or is there something fundamental about the docker-compose setup that I’m missing?
I’ve pushed my latest code with the packaged tar.gz in the package folder if that helps.
Do you have any ideas on what I should try next to get the simulator to connect to my integration running on my Mac?
Have you registered the driver with the remote simulator using the Core API, as I described in my previous post? As you can see in my script, it’s quite a simple POST request to the api/intg/drivers endpoint with a JSON payload that includes the driver metadata. You should get a response from the API and see the integration in the web configurator if the registration was successful.
To me it looks like you have just uploaded a custom integration again, where you added the driver_url object to your driver.json. This won’t work because it’s still a custom integration, where the remote core decides on which port the integration runs and ignores driver_url, as it’s running the driver locally anyway.
External integrations don’t need to be packed as an archive file and uploaded to the remote. You just need to register them using the Core API, or the web configurator on the real remote hardware. The integration itself has to be run on an external device, hence the name “external” integration. Also make sure not to run two integrations with the same id at the same time (external or custom). The core won’t allow this.
After three days of troubleshooting, I’m still facing connection issues with my FYTA integration for Remote Two. I hope someone with experience can help.
My Approach:
Created a Node.js integration using the @unfoldedcircle/integration-api package (v0.3.0)
Using the official Unfolded Circle Integration API to create a WebSocket server
Server binds to all interfaces (0.0.0.0:8766)
Created registration scripts that attempt connection via multiple URLs
The Problem:
Integration server starts correctly (verified with logs)
All registration attempts from Docker result in “Driver not connected. TimedOut” errors
Connections to ws://192.168.68.61:8766 (my Mac IP) consistently fail
Port 8766 is accessible (verified with a test HTTP server on port 9766)
macOS firewall is disabled
What I’ve Tried:
Used 0.0.0.0 for binding to all interfaces
Created a minimal test WebSocket server that also fails
Modified registration scripts to attempt multiple URLs (host.docker.internal, localhost, direct IP)
Port connectivity tests show the ports are accessible
Docker appears to be running but can’t establish WebSocket connections
Logs show:
WebSocket server starts successfully
No incoming connection attempts are logged
“WebSocket clients: Not connected” messages appear periodically
Is anyone successfully running integrations on macOS with Docker? Could there be something specific about WebSocket connections between Docker and macOS that I’m missing?
I checked your integration on the remote hardware and also had the same WebSocket connection issues. So it must be something wrong with your integration not connecting to the remote core, and not a Docker issue. Have you tried different ports?
Your simple-register script also has some issues. You’re using a non-existing API endpoint and tried to include the driver url in the endpoint URL. My example script shows you how to register the driver. Instead of creating new driver metadata, just use the content of your driver.json and add the driver url to it.
Looking at your code again and comparing it with the examples from UC’s node library, it seems to be missing the api.init(driver.json) call. If this is missing, the WebSocket server is not started. When the log message appears that the server is started, nothing has actually been done with the API yet at that point in the code.
Hey @kennymc.c, Thanks for staying up until 1 AM German time to provide insights!
The Core Issue: WebSocket Event Handling
The main problem was with how the integration handled WebSocket events from Remote Two. Looking at logs, I discovered that:
The Remote Two simulator sends raw WebSocket messages like setup_driver with credentials
The IntegrationAPI instance wasn’t properly translating these into the high-level events my handlers expected
This resulted in timeouts during setup, as the API calls to FYTA were never completed
My Solution: Direct WebSocket Event Handling
Rather than relying solely on the IntegrationAPI’s event routing, I implemented a direct WebSocket event interceptor:
driver.js
// Direct handling of setup_driver event
if (event === 'setup_driver' && args.length >= 2) {
console.log(`DIRECT: Direct handling of setup_driver event`);
const sessionInfo = args[0];
const setupData = args[1];
handleSetupDirectly(sessionInfo, setupData);
}
This approach:
Directly catches the raw WebSocket messages
Processes them immediately before timeout occurs
Returns quick success responses to Remote Two
Continues FYTA authentication in the background
I’ve retained the IntegrationAPI for entity management but bypassed its event processing for critical actions like setup.
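Generically, the quick-ack-then-background pattern looks like this (illustrative only; `sendSetupOk` and `authenticate` stand in for the real calls, they are not library APIs):

```javascript
// Illustrative quick-ack pattern: acknowledge the setup event immediately,
// then finish the slow work (FYTA authentication) in the background so the
// remote does not hit its setup timeout. All callback names are stand-ins.

function handleSetup(sessionInfo, setupData, { sendSetupOk, authenticate, onReady, onError }) {
  // 1. Acknowledge right away so Remote Two gets a response before its timeout.
  sendSetupOk(sessionInfo);

  // 2. Continue the slow part asynchronously; report the outcome later.
  authenticate(setupData.email, setupData.password)
    .then(onReady)
    .catch(onError);

  // The handler itself returns immediately.
}
```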
Current Status
The integration is now successfully:
Connecting to Remote Two
Processing user credentials immediately
Authenticating with FYTA in the background
Retrieving plant data (12 plants detected)
Creating sensor entities with moisture, temperature, and light values
I’ve committed the working code to GitHub with these changes.
However, I still see timeout/abort messages from Remote Two, likely because it expects responses in a different format than what the IntegrationAPI is sending.
Questions
Given that the integration is now working (retrieving plant data), I’d like to know:
What’s the best way to handle the timeout messages? Should I completely bypass the IntegrationAPI and implement my own WebSocket server?
Is using jq in the registration script the preferred approach?
What’s the recommended approach for entity refreshing? Currently using background timers to periodically fetch updates from FYTA.
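On the refresh question, the background-timer approach I’m using can be kept quite small; a sketch (the interval value is arbitrary and `fetchUpdates` stands in for the FYTA call):

```javascript
// Simple polling helper: runs fetchUpdates immediately, then on an interval.
// Returns a stop() function so the driver can cancel polling on disconnect.

function startPolling(fetchUpdates, intervalMs, onError = console.error) {
  const tick = () => {
    // Never let one failed poll kill the timer.
    try {
      const p = fetchUpdates();
      if (p && typeof p.catch === "function") p.catch(onError);
    } catch (err) {
      onError(err);
    }
  };
  tick(); // initial refresh right away
  const timer = setInterval(tick, intervalMs);
  return () => clearInterval(timer);
}

// Example (hypothetical client object):
// const stop = startPolling(() => fytaClient.refreshAllPlants(), 5 * 60 * 1000);
// // ... later, e.g. on Disconnect: stop();
```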
I’m quite confused about how credentials are being handled. For the video, I wanted to record the full setup flow from scratch, so I:
Reset the simulator to its original state
Removed all simulator files
Deleted the configuration files used by simple-start.sh
Started everything fresh
But somehow, my configuration data is STILL there! The integration immediately connects without me entering credentials. It’s as if the credentials are stored somewhere I can’t find.
Is this something with the integration API? Does it store credentials somewhere under my home directory that I’m not aware of?
The video also shows an error I’m getting when trying to add an Entity. When I run the integration, it shows an error when trying to update the entity attributes.
Any support would be greatly appreciated. I’m confused about where the sensor data and credential data are stored from my node server. Does the Remote Two API maintain some kind of persistent storage that survives even complete resets?