Yes, using any local agent gives an LLM access to your local file system. Think of it like giving a knowledgeable friend access to your computer - someone who knows how to perform nearly any task, but who can make mistakes. You need to give that friend very carefully crafted instructions (prompts) about the exact task you want completed, and you need to ensure the proper guardrails are in place to keep your friend from doing things that could wipe out important data. And you want to make sure you only give access to your computer to your smartest, most trustworthy friends :) (choose your LLMs carefully).
If you tell your friend to erase your hard drive, and they have no reason not to, and/or you have no protections in place to stop them, then yes, your friend could perform that irreversible, unintended action. There are plenty of stories about people erasing their databases, hard drives, email accounts, etc. using Openclaw and other agents - and Pi has even fewer limitations to keep you from performing potentially destructive operations.
But frontier LLMs are now very smart, and most agent harnesses ship with default guardrails. A sandboxed workspace folder, for example, may be the only place the agent is allowed to read or write files, and there may be default limitations on its ability to interact with resources on the Internet, with particular OS functionality, etc.
All of those settings are generally configurable during the agent setup routine, and/or in manual configuration steps, so you can enable whatever level of control you want for the agent system. You can also choose to, for example, provide credentials for private accounts and give your agent/LLM system access to data and third-party processes, using tools & skills that you enable (Openclaw, for example, can now interact with Google Meet sessions). Be extremely careful with those sorts of interactions - I basically don't use agents for any of that sort of work.
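To make the sandboxed-workspace idea concrete, here's a minimal Python sketch of the kind of check a harness might run before letting the agent touch a file. The `WORKSPACE` path and the `safe_path` helper are hypothetical, purely illustrative - not any particular agent's actual implementation:

```python
from pathlib import Path

# Hypothetical workspace root - the only folder the agent may touch.
WORKSPACE = Path("/home/dev/agent-workspace").resolve()

def safe_path(requested: str) -> Path:
    """Resolve a path the agent asked for and refuse anything that
    escapes the workspace (e.g. via '..', symlinks, or absolute paths)."""
    candidate = (WORKSPACE / requested).resolve()
    if not candidate.is_relative_to(WORKSPACE):
        raise PermissionError(f"{requested!r} is outside the sandbox")
    return candidate

# Allowed: resolves to a location inside the workspace.
print(safe_path("src/main.py"))

# Blocked: path traversal out of the workspace raises PermissionError.
try:
    safe_path("../../etc/passwd")
except PermissionError as e:
    print("refused:", e)
```

Resolving the path *before* comparing it to the workspace root is the important part - a naive string-prefix check is exactly the kind of guardrail that `..` tricks walk straight through.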
Every agent comes with a different set of tools preconfigured and a different set of default permissions enabled. Pi has perhaps the fewest guardrails of all the well-known agent systems, which is something I'm comfortable with, based on the sorts of work I do with it (mostly development work in a single folder on a wipeable dev machine), and because I trust my own experience working with LLMs and computing systems in general.
I would never install an agent in an environment with compliance requirements. I also don't provide credentials to accounts like email, and I tend to run agents in clean environments such as VPS accounts, which can be wiped and reinstalled instantly as needed and which don't have access to private data.
I transfer the results of the work agents complete - the developed software and other artifacts created by an agentic process in an isolated environment - to other systems the agent doesn't have access to. And of course, production code only gets released after testing in an isolated development environment that can be broken and iterated upon, just like in any traditional software development workflow.
Nullclaw, as one example of an agentic system, starts out with access to only a single folder, no Internet access, and very few permissions to do anything on the OS - which makes it virtually useless until you configure the permissions it needs.
Pi is generally the opposite. It lets you do almost anything out of the box, so you can shoot yourself in the foot with it really easily if you're not aware of how to use it carefully.
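Purely as an illustration of the two philosophies (these are invented permission profiles, not the real configuration formats of Nullclaw or Pi), the difference in defaults might look something like this:

```python
# Hypothetical permission profiles, sketched to contrast the two
# philosophies - not the actual configuration of any real agent.
LOCKED_DOWN_DEFAULTS = {           # Nullclaw-style: deny by default
    "filesystem": ["/home/dev/agent-workspace"],  # one folder only
    "network": False,              # no Internet access
    "shell_commands": [],          # nothing allowed until you opt in
}

WIDE_OPEN_DEFAULTS = {             # Pi-style: allow by default
    "filesystem": ["/"],           # the whole filesystem
    "network": True,               # unrestricted Internet access
    "shell_commands": ["*"],       # any command the OS user can run
}
```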
Security is the biggest concern with using any sort of agent, and most people know almost none of the fundamentals of Internet security. Be sure to learn as much as you can - it can be a long road!
The biggest thing agents do for me is enable automated software development iterations. For example, instead of getting code from ChatGPT, running it, pasting debug errors back into the ChatGPT conversation, and repeating that process by hand, a local agent can actually run the generated code, read the output errors, stop and start the application server, and edit the code - all autonomously, until the application runs error-free with the intended functionality.
That sort of iterative automation is particularly useful when you're running locally hosted LLMs that aren't quite as capable of delivering perfect running code on the first shot, the way you can expect from frontier models such as Claude and GPT.
The ability of an LLM to run, test, and reason about fixing its own imperfect code in an iterative loop - on a machine it controls, with a command line whose I/O it can see, where it can stop and start processes, read network data, etc. - is a setup whose automation makes up for less-than-perfect code generation.
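A rough sketch of that loop, in Python. Everything here is illustrative: `generate_fix` is a placeholder for whatever LLM API your harness calls, `app.py` is a hypothetical target file, and the iteration cap is just one sensible guardrail among many:

```python
import subprocess

MAX_ITERATIONS = 10  # guardrail: never let the agent loop forever

def generate_fix(source: str, error_output: str) -> str:
    """Placeholder for an LLM call that returns revised source code.
    A real harness would send the code and the error to your model."""
    raise NotImplementedError

with open("app.py") as f:
    source = f.read()

for attempt in range(MAX_ITERATIONS):
    # Run the generated code and capture everything it prints.
    result = subprocess.run(
        ["python", "app.py"],
        capture_output=True, text=True, timeout=60,
    )
    if result.returncode == 0:
        print(f"clean run after {attempt} fix(es)")
        break
    # Feed the error output back to the model and write the revision.
    source = generate_fix(source, result.stderr)
    with open("app.py", "w") as f:
        f.write(source)
else:
    print("gave up - still failing after", MAX_ITERATIONS, "attempts")
```

Real harnesses layer much more on top of this (diff-based edits, running test suites instead of the bare script, human approval gates), but the run/read/edit/repeat core is the same.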
Giving an LLM the ability to iterate autonomously can yield tremendous productivity gains in that part of the software development process, and the same intelligent autonomous iteration can make many other sorts of mundane, time-consuming projects many times more productive. But I would not suggest giving your LLM/agent access to your emails and other private accounts just to save time responding to people (especially if you're using commercially hosted LLM models). That's the sort of irresponsible application of agentic systems that gives them a bad rap.
Be careful and start slowly with a system that imposes a lot of guardrails by default, dive in carefully in an environment that can be destroyed, don't give your LLM any credentials (especially if you're using commercially hosted LLMs that may add them to their training data or otherwise leak private information!), back up regularly as part of any automated process (some agent systems are built to use Git, which can be helpful - I have my own techniques), and learn how to be safe from experience. Don't tell your agent to rm -r your root directory ;)
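On the backup point: one simple habit is committing a checkpoint before every automated edit pass, so any destructive change is one git revert away. A minimal sketch, assuming the agent workspace is already a Git repository (the `checkpoint` helper and the paths are hypothetical):

```python
import subprocess

def checkpoint(workdir: str, label: str) -> None:
    """Commit everything in the workspace before the agent edits it,
    so each iteration is recoverable with plain Git tooling."""
    subprocess.run(["git", "-C", workdir, "add", "-A"], check=True)
    # --allow-empty keeps one commit per iteration even if nothing changed.
    subprocess.run(
        ["git", "-C", workdir, "commit", "--allow-empty",
         "-m", f"agent checkpoint: {label}"],
        check=True,
    )

checkpoint("/home/dev/agent-workspace", "before fix attempt 3")
```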