
rgg

Members
  • Content Count: 22
  • Joined
  • Last visited

Community Reputation: 2 Neutral

My Information
  • Agent Count: 200+


  1. We were having trouble managing workstations, especially laptops, because they were going offline overnight. This monitor/autofix setup has drastically improved the situation.

     Components:
     • Install and Apply Power Plan [function script] - creates and runs a PowerShell script to download a .pow file, install the power plan, and apply it. This assumes that @powerPlanFileSource@ has been defined and points to a .pow file in the LTShare transfer folder. So if your power plan file is \LTShare\Transfers\PowerPlans\nosleep.pow, you will have defined powerPlanFileSource = PowerPlans\nosleep.pow. This sets a variable @installAndApplyPowerPlanResult@ = success upon success, so you can check the result after calling it.
     • Apply Power Plan [function script] - creates and runs a PowerShell script to apply an already installed power plan. This assumes that @powerPlanName@ has been defined and is the power plan it should apply to the computer. This sets a variable @applyPowerPlanResult@ = success upon success, so you can check the result after calling it.
     • Apply [YOUR POWER PLAN NAME] [script] - conditionally runs the two function scripts above. You set the required variables in lines 2 and 3, and it will check whether the plan is installed and act accordingly. This sets a variable @autofixResult@ = success upon success, so you can check it after calling it.
     • ~Autofix incorrect power plan [script] - an autofix script to be called by a monitor. If called, it will run the Apply [YOUR POWER PLAN NAME] script. If the script is successful, we're fine. If the script fails, it will create a ticket with the subject and body defined by lines 2 and 3 of the Then section, and when the monitor succeeds it will close the ticket with the note defined by line 2 of the Else section.
     • On Incorrect Power Plan [monitor] - a RAWSQL monitor that fails if your power plan isn't applied, configured to use an alert template that executes ~Autofix incorrect power plan.

     Configuration:
     1. Create your power plan. On a laptop, set up the desired power configuration, including lid actions. Save it with a name you want your clients to see if they go looking at their power plan. Get the GUID of your power plan with the powershell command powercfg /List, then export the power plan to a .pow file with the powershell command powercfg -export "%UserProfile%\Desktop\MyPowerPlan.pow" GUID (GUID is the GUID from the previous step). Move MyPowerPlan.pow somewhere in your LTShare\Transfer.
     2. Import the attached files into Automate.
     3. Modify the Apply [YOUR POWER PLAN NAME] script. Rename it and change the Notes section as needed. Set lines 2 and 3 to the correct values for the power plan you created and the file you exported. Ensure line 24 runs the "Install and Apply Power Plan" script and line 34 runs the "Apply Power Plan" script.
     4. Modify the ~Autofix incorrect power plan script. Set lines 2 and 3 of the Then section and line 2 of the Else section as desired. Ensure line 13 points to the Apply [YOUR POWER PLAN NAME] script.
     5. Modify the On Incorrect Power Plan monitor. In Configuration > Additional Condition, change pp.currentPlan != "[YOUR POWER PLAN NAME]" so it references the name of the power plan you created in step 1 (no brackets). Also in Configuration > Additional Condition, replace [YOUR MONITOR ID] in WHERE AgentID=[YOUR MONITOR ID] with the monitor id (this is set upon import).
     6. Create an alert template. Go to Automation > Templates > Alert Templates (assuming Automate 12), click on New Template, name it as you like, and add an alert to run the ~Autofix incorrect power plan script, applied every day all day.

     Now it's just a vanilla monitor setup where you enable the monitor for whatever groups you want (e.g. Patching.Patch Install - Workstations, Service Plans.Windows Workstations.Managed 24x7) and set it to use the alert template you created in step 6.

     -rgg

     *thanks to @Gavsto for his rawsql writeup. It's so good I just open it by default every time I'm starting a RAWSQL monitor.

     ~Autofix incorrect power plan.xml
     Apply [YOUR POWER PLAN NAME].xml
     Apply Power Plan.xml
     incorrect_powerplan_monitor.sql
     Install and Apply Power Plan.xml
  2. If you want to bill for patching accurately, you can use the attached scripts.

     1. Create two computer-level EDFs, "Patching Ticket ID" and "Patching Start Time". Both are text fields and can be set to read-only with default values of 0.
     2. Modify the scripts to point to them:
        • Patching Ticket - Start: lines 26, 27
        • Patching Ticket - Finish: lines 15, 16, 49, 50
        If you aren't using CW Manage, you'll also want to remove lines 18-21 of "Patching Ticket - Start".
     3. In the Patch Manager under Microsoft Update Policies, modify each policy you want to report time for:
        • Set "Patching Ticket - Start" as the script to run before install.
        • Set "Patching Ticket - Finish" as the script to run after install/before reboot window.

     Patching Ticket - Finish.xml
     Patching Ticket - Start.xml
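     For reference, a quick way to eyeball the EDF values after the scripts have run is to query them directly. This is just a hedged sketch: it assumes the two EDFs exist exactly as named above (so they appear as columns on v_extradatacomputers), and computers.Name is assumed to be the agent name column.

        -- Sketch only: list the patching EDF values per computer.
        -- Computer-level EDFs show up as backtick-quoted columns on v_extradatacomputers.
        SELECT computers.ComputerID
              ,computers.Name
              ,`Patching Ticket ID`
              ,`Patching Start Time`
        FROM computers
        LEFT JOIN v_extradatacomputers ON computers.ComputerID = v_extradatacomputers.ComputerID;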
  3. Assuming A,B,C,D are clientid values, try:

     Software.ComputerID NOT IN (SELECT ComputerID FROM Software WHERE `Name` LIKE '%DeskDirector%')
     AND Computers.LastContact > DATE_ADD(NOW(),INTERVAL -15 MINUTE)
     AND Software.ComputerID IN (SELECT ComputerID FROM Computers WHERE ClientID IN (A,B,C,D))

     Side note: limiting by last contact to 15 minutes is probably overkill, since software inventory is generally stable.
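     If you want to sanity-check that condition outside the monitor/search, something like the following should return the matching agents. It's a hedged sketch: it filters on Computers directly rather than Software, and Computers.Name is an assumption - adjust to your schema.

        -- Sketch only: computers at clients A,B,C,D that have checked in within
        -- 15 minutes and do not have DeskDirector in their software inventory.
        SELECT Computers.ComputerID
              ,Computers.Name
        FROM Computers
        WHERE Computers.ComputerID NOT IN (SELECT ComputerID FROM Software WHERE `Name` LIKE '%DeskDirector%')
          AND Computers.LastContact > DATE_ADD(NOW(), INTERVAL -15 MINUTE)
          AND Computers.ClientID IN (A,B,C,D);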
  4. rgg

    Password Reset Web Portal

    The SQL for the password encryption (it's not a hash, since it needs to be reversible) is:

    AES_Encrypt('PLAINTEXTPASSWORD', SHA(CONCAT(' ', ClientID+1)))

    There's no REST API for LT/Automate, so you'll need to run the command against the database.
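    For example, to see what the encrypted value looks like for a given client before writing it anywhere, you can run a SELECT against the clients table. A minimal sketch - the ClientID of 42 and the plaintext are placeholders, and which column you ultimately store the result in depends on your portal setup and isn't shown here:

       -- Sketch only: compute the reversible encrypted password for one client.
       SELECT ClientID
             ,AES_ENCRYPT('PLAINTEXTPASSWORD', SHA(CONCAT(' ', ClientID + 1))) AS EncryptedPassword
       FROM clients
       WHERE ClientID = 42;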
  5. Sure, exposing 3306 is not a great example. Better: are there any plans to make https://XXX.hostedrmm.com/LabTech/ControlCenter.aspx available for cloud users?
  6. @TheCloudGuy, the caveats you give are exactly the limitations I'm referring to. We can't download plugins from labtechgeek or make our own, and we can't use port 3306 to access our data and connect it to other tools we have (e.g. Slack or Teams integration, integration with our customer portal, etc.).
  7. We're currently cloud-hosted, and while it's nice to have someone else managing the server I'm getting to the point where the restrictions (no external plugins and in particular no remote access to the DB) are too onerous to go on. Has anyone recently moved their Automate server to AWS (or another cloud service)? I'm curious what your experience was and what particular package you moved to. Thanks!
  8. rgg

    Can You edit Screen Connect Preferences?

    Are you cloud-hosted or self-hosted?
  9. The LT11 Reporting Center is hard to use. Let's say you have a computer-level checkbox EDF and you want a pie chart that shows what percentage of computers at a client have that box checked. I spent hours trying to get Data Sources and Relationships to work correctly, and while I eventually got the right percentages, I couldn't get a sensible legend. Poking around the example and template reports, it's apparent that the report devs don't do that - instead they just create reporting views and drop that data into charts. So here's how you do that.

     1. Create your EDF. I created a computer-level checkbox EDF called "Binary Flag". Then I went to a client with two devices and ticked one but not the other, so my report should show 50% checked and 50% unchecked.
     2. Write some SQL. I want a table with the total number of computers with a checked and unchecked status, grouped by client. I'll need a column with the label I want in the legend (binaryFlagValue), the data I'm reporting (computerCount), and something to attach to other client-level data (clientid):

        SELECT a.clientid
              ,SUM(computers) AS computerCount
              ,binaryFlagValue
        FROM (
              SELECT clientid
                    ,CASE WHEN `Binary Flag` = 1 THEN "checked" ELSE "unchecked" END AS binaryFlagValue
                    ,1 AS computers
              FROM computers
              LEFT JOIN v_extradatacomputers ON computers.computerid = v_extradatacomputers.computerid
             ) a
        LEFT JOIN clients c ON a.clientid = c.clientid
        GROUP BY a.clientid
                ,binaryFlagValue

     3. Use that code to create a data source.
        • Click on the "Edit Data Source" button.
        • Add a query.
        • Under "Source", be sure to select "Custom Query".
        • Paste your SQL code (reduced to a single line) into the Custom SQL Query field.
     4. Relate your new table to the client data.
        • Click on "Add..." under Relationships in the "Data Source Editor" window.
        • Link the data from the new table to the client data by clientid.
     5. Define your chart.
        • Add a chart, and in the chart wizard select "Pie" as the chart type.
        • Go to the "Data" step in the wizard and click on the "Auto-Created Series" tab. We're just interested in the "Argument Properties" (source of the labels) and the "Value Series" (source of the data). I'm not sure why, but I can't upload a screen capture of this step, so I'll just be very wordy.
        • Go back to the "Series" tab, then click on the Legend Text Pattern vertical tab on the right. Your Pattern will be {A} and your Placeholders value is Argument. Leave the option to "Synchronize with point pattern" unchecked.
     6. Run your report.

     The great thing about this is that it can be extended almost indefinitely. The general approach is to use SQL to define a table that you could use to create a table in Excel. Then dump that table into the Report Center, put in a simple relationship, and make your chart.
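     If you'd rather follow the report devs' pattern literally and select from a reporting view instead of a custom query, you can wrap the same logic in a view and point the data source at it. A hedged sketch - the view name is made up, and the query is restructured without a derived table so it stays a valid MySQL view definition:

        -- Sketch only: same checked/unchecked counts per client as the query above,
        -- exposed as a view the Report Center data source can select from.
        CREATE OR REPLACE VIEW v_report_binaryflag AS
        SELECT computers.clientid
              ,COUNT(*) AS computerCount
              ,CASE WHEN `Binary Flag` = 1 THEN 'checked' ELSE 'unchecked' END AS binaryFlagValue
        FROM computers
        LEFT JOIN v_extradatacomputers ON computers.computerid = v_extradatacomputers.computerid
        GROUP BY computers.clientid
                ,binaryFlagValue;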
  10. By the way, the best way is to kill the OOTB groups:
      1. Remove ALL approval policies from those groups in the Patching plugin. If you don't do this, you won't be able to delete those approval policies later.
      2. Open the group "Patching" from the navigation tree and change the Type from Patching to Organizational.
      Reload the system cache and refresh your Patching window, and all those groups should be gone.
  11. I've recently revamped our patching with the LT11 patching tool, and here's what I did and why:

      1. Killed all the OOTB groups. I did this for two reasons:
         a. Patching is important, and I want to know exactly how it's being applied. The best way to do this, given the time, is to put it together myself.
         b. The OOTB groups make no sense. The "Approvals - X" searches are less strict than the "Patch Install - X" searches. Why? LT support didn't know. Leaving aside the "Approvals - X" searches (and the fact that "Approvals" makes no sense as the name of a group of devices), the "Patch Install - Servers - X" searches are inconsistent. Some use Computer.Location.Extra Data Field.Patching.Under Microsoft Patching Contract, others do not. In fact, the "Patch Install - Servers" search does not use that EDF, so a VMH would count as a server but not a VMH! LT support agreed that this was "weird". Throw it all out now.
      2. Built my own searches. They are:
         • All Managed Devices (this group gets the default approval policy applied)
         • Workstations (this group gets daily patching scheduling)
         • Laptops (this group gets daytime patching enabled)
         • Servers (this group gets Sunday patching scheduling)
         • Servers - DC (this group gets no special treatment at the moment, but might in the future)
         • Servers - VMH (this group gets Saturday patching scheduling, so as not to conflict with hosted machines that will get the "Servers" group scheduling)
      3. Set up deployment schedules and put a handful of workstations/servers into the Test group. I left the pilot phase empty, figuring the rest of the world is my pilot for the first 5 days. If no disasters are reported, the patches go to the Test group for 5 more days. If no problems come up, then they go to production.

      I don't have any client-specific searches. I think 99% of the time this is unnecessary and overly complicated. First, who wants to create a new search every time you sign a client? Second, how often do your clients really have different patching requirements? When there are special requirements, there's almost always a better way to apply them. Is it because they have a specific application that breaks if a certain KB is applied? Then create a search for machines with that application. Now, there could be cases, like a client who insists on getting patches on Thursday, and that's fine. But as a rule I contend that client-specific groups are not the way to go.
  12. I've written a script that will run every morning before work and send out an email with any servers/locations that are currently down. The idea is to be able to wake up and know right away if we have an outage without having to log into anything. However, since the script is "global" and doesn't actually run on a specific computer, client, or other target (it just runs a SQL query, parses the results, and emails them), I'm not sure how to set up the target. Does anyone have anything similar with a good setup?
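      For context, the query itself is the easy part. Something along these lines covers the "servers down" half (a hedged sketch: the OS and Name columns on computers and the Name column on clients are assumptions, the 15-minute threshold is arbitrary, and a "location down" check would need a different grouping):

         -- Sketch only: servers that have not checked in within the last 15 minutes.
         SELECT c.Name AS ClientName
               ,comp.Name AS ComputerName
               ,comp.LastContact
         FROM computers comp
         JOIN clients c ON comp.ClientID = c.ClientID
         WHERE comp.OS LIKE '%Server%'
           AND comp.LastContact < DATE_ADD(NOW(), INTERVAL -15 MINUTE)
         ORDER BY c.Name, comp.Name;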
  13. This is a terrific write-up LoneWolf!
  14. These definitions worked for me. For workstations I resorted to path definitions and got away from the registry altogether - it really does look like LT can't process that info correctly. They are sufficient to accurately identify machines with Symantec Cloud installed and running.

      Workstations:

      Servers:
  15. You can control what LabTech/Automate values map to what ConnectWise/Manage values through the ConnectWise plugin in LabTech:
      1. Open the ConnectWise plugin.
      2. Click on Agreement Mapping.
      3. Click on "Asset Templates".
      4. Depending on your setup, you may have to create a Template at this point. Either way, you can now map "LabTech Product" to "ConnectWise Product".

      As a side note, if you ever need to create a new ConnectWise Product, you do that in ConnectWise under Procurement > Products.