Why don't you use Cura slicer?
-
As far as I know, even with support turned on for bridges, it will still use the modified speed settings for the bridging sections. I'm not at my printer right now so I can't verify.
But I do have access to Slic3r right now. I made a simple STL to test: https://www.tinkercad.com/things/gXWCAMR5SEE
It looks like, for some reason, even with support for bridges turned off it still adds support for the outermost wall. I've never noticed that before. With "Don't support bridges" checked and "only on build plate" unchecked, it adds supports under both outer walls, but not under the bridge. With "only on build plate" checked, it adds support for just the outer wall over the build plate.
It might be easiest to download Slic3r PE and see how it behaves rather than have me describe it.
Also, I should say that Slic3r may not be doing it the best way either. The only thing it had going for it in my book was cooling control during bridges and speed control in more corner cases.
-
Cura is, in my opinion, slightly better than Simplify3D. The biggest problems are the lack of custom support placement and the lack of simple things like temperature and fan settings per layer number. There are a ton of adjustments that could be replaced by simpler ones, especially when it comes to cooling. At my customer's site I print mainly with UMs, and there is no way to poke around in the exported gcode afterwards to set things like "turn the fan on 70% from layer a to b".
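When you can get at the exported gcode, that kind of per-layer override is easy to bolt on as a post-processing step. A minimal sketch, assuming Marlin-style M106/M107 fan commands and the ;LAYER:<n> comments Cura writes into its output (the layer numbers here are made up for illustration):

```python
import sys

FAN_ON_LAYER = 10          # hypothetical: first layer of the override window
FAN_OFF_LAYER = 25         # hypothetical: layer where the override should end
FAN_PWM = int(0.70 * 255)  # 70% of full scale -> "M106 S178"

def patch_fan(lines):
    """Insert fan overrides at layer boundaries. A real script would also
    strip the slicer's own M106 lines inside the override window."""
    out = []
    for line in lines:
        out.append(line)
        if line.startswith(";LAYER:"):        # Cura's layer-change marker
            layer = int(line.split(":")[1])
            if layer == FAN_ON_LAYER:
                out.append(f"M106 S{FAN_PWM} ; forced fan override\n")
            elif layer == FAN_OFF_LAYER:
                out.append("M107 ; end of forced fan override\n")
    return out

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        sys.stdout.writelines(patch_fan(f.readlines()))
```

Run it as `python patch_fan.py sliced.gcode > patched.gcode`. Of course, having to do this outside the slicer is exactly the complaint.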
-
+1 for custom support placement / removal in Cura. Surely has to be the no. 1 feature request.
-
I tried to use Cura last week. A nightmare setting up the machine and print settings, only to find in the end that there is no Z-offset option.
-
Maybe this is a bit unfair, as I just tested Cura right now for the first time (because of this thread), coming from S3D.
Parts layout is not very nice. I could not find any way to drop the part to the bed and had to resort to rotating and translating manually. This is way too clumsy; a basic drop-face-to-bed needs to be implemented. Auto-arrange placed parts outside the build volume.
In the gcode preview I accidentally moved a model, which wiped out the gcode I was reviewing.
No manual supports (yes, I knew that; this is why I really didn't bother with it until today), and I do not see that I have any additional control over supports to make up for it. I use manual supports in 90% of my models (most of the rest need no supports). I had to go hunting in the menus to actually get any semblance of controls to be visible. The usual problem I use manual supports to solve is that I have parts with gaps that are too cramped to extract supports from, but that can be bridged or can tolerate a little drooping, while other areas do need the support. Hence the lack of control over supports is pretty much a dealbreaker.
I could not find any way to save the print settings changes I made as a profile. I guess there is a way, but it is not obvious. Clicking the Create button in Manage Materials (which is where I thought it would be) does not appear to do anything.
-
I would like to use Cura, but it crashes under Linux.
That's been my experience, too. Like you, I'm using Ubuntu 16.04. (With the latest kernel, or at least close to it, as provided by Canonical.) Worse, Cura not only crashes, it hangs the X GUI, and sometimes even hangs the whole computer! Those are impressive feats for any Linux program, and not in a good way. Because the computer I generally use for slicing is my main workstation, with dozens of programs open at any time and, typically, an uptime measured in weeks if not months (it's currently 25 days), I'm very reluctant to experiment with new versions of Cura in the hopes that this problem has been fixed, or to explore Cura more fully to determine if it might have features I'd find useful. (The latest version I've tried is 3.0.4; I see that 3.2.0 is now available.)
Beyond that, Cura is slow as a sloth. It takes longer to start up (I just timed it – 55 seconds; Slic3r and ideaMaker both take 2 seconds) than any other slicer I've tried. I don't recall the details, but it's pretty sluggish to actually slice models once it's running, too. I have other, more minor, qualms with it, but they're nothing compared to crashing my computer, which is inexcusable.
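(For anyone who wants to reproduce that kind of startup measurement instead of using a stopwatch, here is a rough sketch. It assumes xdotool is installed and that each slicer's main window title contains the program name, which may need adjusting:)

```python
import subprocess
import time

def time_startup(cmd, window_name):
    """Time from process launch until the first matching window appears."""
    t0 = time.perf_counter()
    proc = subprocess.Popen(cmd)
    # "xdotool search --sync" blocks until a window matching --name exists.
    subprocess.run(["xdotool", "search", "--sync", "--name", window_name],
                   stdout=subprocess.DEVNULL)
    elapsed = time.perf_counter() - t0
    proc.terminate()
    return elapsed

if __name__ == "__main__":
    print(f"cura: {time_startup(['cura'], 'Cura'):.1f}s to first window")
```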
I use either Slic3r or ideaMaker for most of my slicing. Between the two, they get the job done. Slic3r's strengths include a UI that, although a little hard to understand at first, is very intuitive once you get going; and good options for infill and top/bottom layer patterns. Its weaknesses include sluggish performance and a tendency to crash (but without taking anything else with it). IdeaMaker is very fast and reliable, in my experience. It also provides manual support structures, which is a must-have feature for some prints. On the down side, ideaMaker doesn't explicitly support delta configurations (there are workarounds, but the slicer thinks the delta's bed is square), and I had to tweak my startup g-code a bit more than usual to get it to work with my Duet-based delta printer. Until recently, infill options were limited, but the latest version (3.1.0) adds hexagonal and triangular infill.
Slic3r is fully open source and is available in Ubuntu's repositories, which is a plus for an Ubuntu user. IdeaMaker is not open source, but it is available as a native Linux application in a Debian package, so it installs pretty easily. Cura is the worst of these from a Linux package management point of view (although it is open source, and so could easily qualify for packaging); AFAIK, it comes only as a platform-independent .AppImage file. This distribution format has its advantages for a software developer, but when an OS provides high-level package management tools (as almost all Linux distributions do), not using them is a drawback for the user.
-
Cura built from source runs very reliably under Linux so the problem must be some incompatibility or defect in their packaged releases or your system. It would help everyone if you could post an issue with as much debug info (cura.log, stderr, stdout, etc.) as possible at https://github.com/Ultimaker/Cura/issues.
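If it helps, a minimal way to capture that output for an issue is to redirect it when launching. A sketch, assuming the AppImage is executable and reachable as `cura` (point it at wherever your AppImage lives; cura.log itself typically ends up under ~/.local/share/cura/ on Linux):

```python
import subprocess

# Run Cura with stdout/stderr captured to files that can be
# attached to a GitHub issue.
with open("cura.stdout.txt", "wb") as out, open("cura.stderr.txt", "wb") as err:
    subprocess.run(["cura"], stdout=out, stderr=err)
```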
-
I haven't had any crashes with Cura (using the AppImage), nor with Slic3r. And I'm running Debian sid (unstable)! If both crash on Ubuntu, this is an Ubuntu issue.
-
I could not find any way to drop the part to the bed and had to resort to rotating and translating manually. This is way too clumsy; a basic drop-face-to-bed needs to be implemented.
It's there as standard and (AFAIK) enabled by default (you found the application preferences menu, yes/no?).
Auto-arrange placed parts outside the build volume.
I'd forgotten about that gem; yes, auto-arrange appears to totally ignore the build area, and will happily move parts from a printable position to one outside the bed. It's very random, too; there is no coherent logic I can ever see in the decisions auto-arrange makes.
-
I would like to use Cura, but it crashes under Linux.
That's been my experience, too. Like you, I'm using Ubuntu 16.04.
Upgrade; you are 18 months out of date in a fast-moving world. AppImage/Qt has matured a lot over that time.
@srs5694: Cura is the worst of these from a Linux package management point of view (although it is open source, and so could easily qualify for packaging); AFAIK, it comes only as a platform-independent .AppImage file.
AppImage IS packaging: distribution-independent code containerization.
@srs5694: This distribution format has its advantages for a software developer, but when an OS provides high-level package management tools (as almost all Linux distributions do), not using them is a drawback for the user.
I'd argue it is so simple to use (download, make executable, run) that making the user install a package (open software centre, find, click, supply password, install, close software centre, find app and run; or sudo dnf/yum/apt install <must know the exact package name here>) is not really that much simpler; just more familiar.
[VERY TL;DR]
Add the fact that each of these distribution-specific packages needs to be properly maintained, tracking every OS /and/ package update. It's a lot of effort for the buildmasters and devops guys to keep up with; AppImages give a single solution across a huge number of target platforms and distros; they are the future. - I say this as someone who currently makes his money doing cloud application packaging/building into RPMs for 'traditional' software deployment, and who looks at AppImages as the best replacement tech for application distribution (but emphatically not OS or system components) to come along so far.
They also have a HUGE disadvantage, and I think this will kill them over time, or force some sort of update/patch system to be added: since all the libraries they use are pre-packaged, there is no possibility of retroactively upgrading individual items.
For instance, if you bundle LibreSSL v1.2.3, you are stuck with it; if it is found vulnerable at some future time you will need to make a whole new AppImage to bundle the newer libs. Fine if the project is very agile and quickly updates, but a nightmare for other scenarios, or for those who are slow to upgrade. But this is not a unique problem: I also see many traditionally packaged apps that statically link crucial libraries to make life easier for developers (part of my job involves re-engineering CentOS RPMs for our internal purposes; I read a lot of %build sections in RPM specfiles, and this happens far more often than I would like). I think I can argue that AppImages are in one sense better than other packages, since there is no pretense that they use libraries outside their domain.
-
I would like to use Cura, but it crashes under Linux.
That's been my experience, too. Like you, I'm using Ubuntu 16.04.
Upgrade; you are 18 months out of date in a fast-moving world.
Ubuntu 16.04 is the current latest long-term support (LTS) version of Ubuntu. It's designed for stability, as opposed to the non-LTS releases, three of which have been released since 16.04 came out (16.10, 17.04, and 17.10). The non-LTS releases do have more up-to-date software, but they are not supported for as long and they are more likely to contain bugs. Except in rare circumstances, advice to "upgrade" from the most recent LTS to a non-LTS release of Ubuntu is ill-advised. If running the latest version of a program is critical for a computer, that may be an exception to the rule (although even then, if it's just one program, there may be other options).
Note: I'm employed by Canonical, so I'm very familiar with the Ubuntu release model, and I use an LTS release on my main desktop for good reason.
Cura is the worst of these from a Linux package management point of view (although it is open source, and so could easily qualify for packaging); AFAIK, it comes only as a platform-independent .AppImage file.
Appimage IS packaging. Distribution independent code containerization.
Yes, AppImage is packaging in the generic sense, but it is not the Debian packaging used by Ubuntu (or other package systems, like RPM, used by some other distributions), and that's clearly what I meant.
This distribution format has its advantages for a software developer, but when an OS provides high-level package management tools (as almost all Linux distributions do), not using them is a drawback for the user.
I'd argue it is so simple to use (download, make executable, run) that making the user install a package (open software centre, find, click, supply password, install, close software centre, find app and run; or sudo dnf/yum/apt install <must know the exact package name here>) is not really that much simpler; just more familiar.
First, you're overstating the difficulties of installing a package via a package system – or perhaps understating the difficulties of installing an AppImage file. Try this for the latter: find the URL to download, download, copy the downloaded file to a location on your path (which requires entering your password or becoming root, unless of course you install somewhere that's not a system path, which has some severe problems), create a symlink so the name is less awkward, and repeat every time a new release becomes available. Phrased in this way, the AppImage approach doesn't seem so great, does it? (The whole dance is scripted out below, after these points.)
Second, one of the major points of a package system in the Linux software ecosystem is to standardize package installation, which you dismiss with the phrase "just more familiar." Imagine having to independently install every one of the hundreds or thousands of programs that make up an OS in the way that Cura must (AFAIK) be installed on Ubuntu. Even managing just one or two dozen programs in this way would be a pain. Package systems make this very easy by comparison. You install it once and forget it; you needn't bother with pulling down updates for bug fixes and the like, because the system will do it automatically or semi-automatically as part of general OS updates. (A caveat is that an Ubuntu LTS release isn't likely to upgrade the software version just for the heck of it, but it will update to install bug fixes and security updates. If you want the latest and greatest feature, you'll need to upgrade manually in one way or another.)
Third, you're looking at it from the perspective of a single program. That's valid, but it's a very narrow outlook. Most computers will have dozens or hundreds of programs installed, even outside of core OS programs. Having one package system to maintain them all is a huge advantage over a mish-mash of different package formats, URLs to visit for updates, installation procedures, etc. As a user, every program that deviates from a well-established package system's standards increases the difficulty of maintaining my computer.
Finally, even if the software is not in the distribution's package system, but is available as a package file (RPM, Debian package, etc.), there are advantages to the user over an AppImage file. Package systems ensure that there are no filename conflicts and that the package being installed is compatible with other system components – so if the application requires LibraryA version x.y.z or later, the package system will catch this and block the installation (with a suitable warning) if that's not the case. Packages can include documentation, system-wide configuration files, etc. Packages enable other packages to rely on them (admittedly maybe not important for Cura, but who knows….).
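To make that first point concrete, here is the AppImage "install" dance as a script – a sketch only, with a made-up download URL, and deliberately targeting ~/bin to dodge the root requirement (the not-a-system-path compromise mentioned above):

```python
import os
import stat
import urllib.request

URL = "https://example.com/Cura-3.2.0.AppImage"   # hypothetical release URL
TARGET = os.path.expanduser("~/bin/Cura-3.2.0.AppImage")
LINK = os.path.expanduser("~/bin/cura")           # the less awkward name

os.makedirs(os.path.dirname(TARGET), exist_ok=True)
urllib.request.urlretrieve(URL, TARGET)                    # download
os.chmod(TARGET, os.stat(TARGET).st_mode | stat.S_IXUSR)   # make executable
if os.path.lexists(LINK):
    os.remove(LINK)              # and repeat all of this for every new release
os.symlink(TARGET, LINK)
```

A package manager does all of that – plus the dependency and conflict checking described above – with one command.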
Add the fact that each of these distribution-specific packages needs to be properly maintained, tracking every OS /and/ package update. It's a lot of effort for the buildmasters and devops guys to keep up with; AppImages give a single solution across a huge number of target platforms and distros; they are the future.
You're correct that it's easier for the developer to release a program using a single file format than to create separate RPMs, Debian packages, etc., not to mention binaries for MacOS and Windows; however…
-
By not creating separate OS-specific packages, you're offloading work onto the user. This may not be a big deal in the case of one program, but if you rely on that fact, and the next software developer does, and so on, pretty soon you're into tragedy of the commons territory and users will move on to a platform where these things are handled properly. Also, every one of your users will be inconvenienced by this offloading. Thus, it's not a question of balancing Time X by the developer and Time Y by the user; it's Time Y for each user multiplied by Z number of users.
-
As a corollary to the preceding, speaking solely in my role as a user, I don't care whether it's easier or harder for a developer to release in one format or another; I want to use what's best for me and my computer. For most current Linux distributions, that's whatever package system that distribution uses. Period.
-
In my experience, slicers distributed in AppImage form are the slowest and least reliable (although Slic3r isn't exactly a speed demon, either). I haven't looked very closely, but I suspect this is because they're largely interpreted, whereas faster slicers, like ideaMaker, are compiled. I may be off on this, though; I know very little about AppImage internals or how Cura (or the other AppImage-using slicers I've tried) runs. I just know that Cura is not just slow, but extraordinarily slow.
-
Distribution maintainers have teams of people who work to help get applications packaged and into distributions' repositories. As Cura is open source, it qualifies for that. This would not be zero effort, but Cura's developers could certainly reach out to Debian, Red Hat, and others to get Cura into the relevant OS repositories. (I'm not involved enough in Debian packaging to be of all that much help in this, but I can provide some pointers to help connect Cura developers to the Debian packaging chain, if necessary – send me a PM if interested.)
As for AppImage being "the future" of package distribution: I sincerely hope not. They're a huge step backward from what I've seen, at least from the user's point of view.
- I say this as someone who currently makes his money doing cloud application packaging/building into RPMs for 'traditional' software deployment, and who looks at AppImages as the best replacement tech for application distribution (but emphatically not OS or system components) to come along so far.
My perspective is as both a user and a developer – I've written the GPT fdisk (aka gdisk) partitioning tool and I maintain the rEFInd boot manager, both of which are in the Debian repository (and therefore Ubuntu). Others have done most of the packaging work for both of these, although I was involved in getting rEFInd into the Debian package system. GPT fdisk is available in Red Hat and most other distributions, too, but rEFInd's uptake is a bit spottier outside of Debian and its derivatives. The point is that I do understand the difficulties of creating an RPM or Debian package, and the even greater hurdles involved in getting a package into a distribution's repositories. These tasks are not trivial; but in terms of creating a package, most of the pain is up-front, in learning the packaging system. Once it's prepped, subsequent updates are relatively easy to do, at least to build the package itself. (Debian, at least, has procedural hurdles involved in pushing through updates that appear in its repositories.)
They also have a HUGE disadvantage, and I think this will kill them over time, or force some sort of update/patch system to be added: since all the libraries they use are pre-packaged, there is no possibility of retroactively upgrading individual items.
For instance, if you bundle LibreSSL v1.2.3, you are stuck with it; if it is found vulnerable at some future time you will need to make a whole new AppImage to bundle the newer libs. Fine if the project is very agile and quickly updates, but a nightmare for other scenarios, or for those who are slow to upgrade.
I'd not considered this, but with this point in mind, I'm likely to delete most or all of the AppImage-based slicers I have installed. The security risk is simply too great, especially for a program I don't bother to explicitly upgrade on a regular basis.
In sum, your arguments come across as being very developer-centric – AppImage is easier for you as a developer. I acknowledged as much in my original post. From a Linux user's point of view, though, a Linux package system (which does not include the AppImage format, as I define "package system") is far superior. The title of this thread is "Why don't you use Cura slicer," and implicit in that title is a user-centric point of view. I've presented mine. You may not like to hear it, and Cura's developers may have other priorities – I'm not trying to be judgmental, if that's so – but for me, other slicers do what Cura does not do, and most importantly, they don't do what Cura does do (namely, take down my entire computer, which is otherwise rock solid).
-
FYI, the front end (UI) in Cura is written in Python and the actual slicer that takes the models and generates the gcode is written in C++. Personally, I only work on the C++ part and the UI part is completely black magic to me. The slicer itself really isn't that slow for most models but it does depend on lots of factors. Some combinations of model features and settings will take longer to slice, that's for sure. I am partly guilty inasmuch as a lot of the stuff I have done for Cura has been to improve the quality of the gcode and that often (but not always) involves longer slicing times. I appreciate that gross slowdowns are not acceptable and a lot of my time is spent in finding quicker ways of achieving good results. In fact, I have spent most of today working on improving the speed of the infill line order optimization because it's too slow when there are many thousands of infill/skin lines to consider.
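To give non-developers a feel for the problem (this is not Cura's actual code, just a toy sketch): the optimizer must pick, after each printed line, which remaining line to print next so that travel moves stay short. The naive greedy version below is O(n²) in the number of lines, which is exactly why many thousands of infill/skin lines get slow without something cleverer (spatial indexing, bucketing, and so on):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def order_lines(lines, start=(0.0, 0.0)):
    """lines: list of ((x1, y1), (x2, y2)) segments. Greedily reorder them,
    flipping each segment so it starts at the end nearest the previous one."""
    remaining = list(lines)
    ordered = []
    pos = start
    while remaining:
        best_i, best_flip, best_d = 0, False, float("inf")
        for i, (p, q) in enumerate(remaining):
            if dist(pos, p) < best_d:
                best_i, best_flip, best_d = i, False, dist(pos, p)
            if dist(pos, q) < best_d:
                best_i, best_flip, best_d = i, True, dist(pos, q)
        p, q = remaining.pop(best_i)
        if best_flip:
            p, q = q, p
        ordered.append((p, q))
        pos = q
    return ordered

if __name__ == "__main__":
    # Three vertical infill lines; the greedy order zigzags between them.
    print(order_lines([((0, 0), (0, 10)), ((5, 10), (5, 0)), ((10, 0), (10, 10))]))
```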
-
I appreciate the thoroughness of that reply. This is so off-topic, but worth discussing.
I would prefer if Canonical staff could aggressively dogfood their own product, and that means updating as a priority; to hear otherwise is worrying.
I had a couple of crashes of the AppImage with v3.0.3, nothing with 3.1 or 3.2 beta. None of them took out my base OS, nor would I expect them to; I lay the blame for that firmly on an LTS version of Ubuntu running 'a recent' kernel from a filesystem developer, and probably some other hacks that are not being mentioned. If I was doing tech support your case would be firmly closed with a 'please come back to us once you can reproduce on vanilla Ubuntu' note.
I make a clear distinction between OS and applications here; for the OS the current package management systems are working well, and allow fine-grained continual upgrading, which is crucial for these components and for security. AppImages are not going to replace those, nor do they have any ambitions to do so: https://appimage.github.io/apps/
In an imaginary future, Cura 9.x gets released; at which point someone needs to package it for Ubuntu, someone (else?) packages it for RHEL, someone for AUR, someone for Portage, and so on. The Debian packager needs to consider Debian, Ubuntu (current and LTS), Raspbian, what else? The RHEL packager needs to consider three supported RHEL versions, corresponding CentOS versions, Fedora, Scientific Linux, others… Meanwhile something similar is happening for AUR, SUSE, Portage, etc.
- But hang on: Cura 9.x uses Python 6.66, which is in the latest headline releases of Ubuntu and Fedora but never going to be available for older releases, so those users are either SOL or you need to embed it in the package.
- This is 'the old way' and I sincerely hope we won't see it much longer.
One demonstrable upshot is that if you try 'apt install cura', it does not work. As of this writing only the engine is available, at v14, but no GUI. On Fedora 'dnf install cura' does somewhat better: you get the GUI and engine at v15. Arch (hilariously) has at least 10 Cura packages, some building from Git, so you should be able to get 3.0+ there, but you may have to work your way through the AUR packages until you find the one that is most sane. And remember, AppImages can be put inside other package systems; they are just an executable, and that's what package managers do: install executables. Yet even this is not being done, at least not by the community or the creators.
Devops need solutions for keeping the apps rolling out to the untechnical masses; AppImages, Docker, Flatpak and others are those solutions. Get used to them; they are going to supersede apt and co. A simple user-facing AppImage manager will emerge, app stores will get integration, possibly it will move to a Docker-like model capable of pulling images from distributed repositories and managing them in a local store, or it will evolve in some entirely unforeseen way… agile.
You had a list of objections that appear to stem from objectioneering against new things. AppImages do not run as root, nor as low-UID services, nor can they elevate their own privileges. They can see what the invoking user sees and nothing else; they cannot write over system files or configs, and must store their own config in a user-writable location, usually the invoking user's home directory. If one ever required or asked to run as root I'd be deleting it in an instant. The alternatives, meanwhile, need to be installed by running installers as root, which then make config changes and plonk binary packages deep into the filesystem while blindly executing whatever scripts they are asked to. I think AppImages win very convincingly in the security and convenience stakes.
Now think about that from Ultimaker's perspective, and understand why they are going down this road. It means that no admin access is required to install and run their app, so long as it stays within the user's sandbox. And you only need to make and distribute one package to achieve this across dozens of distributions.
--------------------------------
Finally: I'm not a developer; quite the reverse. My day job is ensuring that our developers make packages (RPMs) that install properly, upgrade/downgrade, etc., so that the customer types a single 'yum install' command to get started. [And then spends five weeks ankle-deep in Ansible issues as he tries to deploy hundreds of instances of our software, each one slightly differently configured, across a number of data centers. Our cloud platform IS easy to install and IS a nightmare to configure; that is why it costs meeelions and we insist you buy consultancy.]
-
I would prefer if Canonical staff could aggressively dogfood their own product, and that means updating as a priority; to hear otherwise is worrying.
You didn't hear otherwise. I said that I was running the latest LTS release on one of my personal systems. I said nothing about other systems I use – and there are many of them. Furthermore, since LTS releases are maintained in the long term, and receive bug fixes and security updates, those LTS releases must also be aggressively tested.
If I was doing tech support your case would be firmly closed with a 'please come back to us once you can reproduce on vanilla Ubuntu' note.
To which I'd reply that I was running "vanilla Ubuntu!" I said so in my post. Perhaps you misinterpreted when I wrote that I was using "the latest kernel, or at least close to it, as provided by Canonical" – I meant that I'm using a stock kernel as delivered with Ubuntu.
AppImages do not run as root, nor as low-UID services, nor can they elevate their own privileges. They can see what the invoking user sees and nothing else; they cannot write over system files or configs, and must store their own config in a user-writable location, usually the invoking user's home directory. If one ever required or asked to run as root I'd be deleting it in an instant. The alternatives, meanwhile, need to be installed by running installers as root, which then make config changes and plonk binary packages deep into the filesystem while blindly executing whatever scripts they are asked to. I think AppImages win very convincingly in the security and convenience stakes.
I never said anything about running Cura (or any AppImage) as root, although I did refer to using sudo or root to install it in a sensible place in the filesystem. (Putting binaries in a user's home directory is so cringe-worthy from a Unix/Linux perspective that it doesn't merit serious conversation!)
Now think about that from Ultimaker's perspective, and understand why they are going down this road. It means that no admin access is required to install and run their app, so long as it stays within the user's sandbox. And you only need to make and distribute one package to achieve this across dozens of distributions.
Yes, I understand this; it's an effort to reduce developer effort, at the cost of deviating significantly from the software distribution model used by the host OSes. From my perspective, that's a sub-optimal solution at best. Particularly if you're advocating putting the binary in users' home directories, it looks like taking a huge step backward to the days of DOS, when people intermingled program files, user data, and so on, with a need to manually update everything. There's a reason things have been moving away from that model for years, and that it was never used in the Unix world.
I suggest you stop replying now; if anything, you're making me think worse of AppImage as an application-delivery format.
Oh, and I've deleted Cura from my system.
-
No manual supports. This is definitely the biggest problem of Cura!
-
+1 for the manual supports; either a way to specify proactively what you want supported, or a tool to selectively delete areas of auto-generated support. Either would work for me.
-
Perhaps you misinterpreted when I wrote that I was using "the latest kernel, or at least close to it, as provided by Canonical" -
Yes, that. I read that as 'my own non-stock kernel'; given you work on filesystems this seemed highly likely. Obviously you intended otherwise. I suggest your experience is very atypical of the customers Cura targets.
I suggest you stop replying now; if anything, you're making me think worse of AppImage as an application-delivery format.
But I'm not trying to convince you; I'm mostly making sure that I have things straight in my own head. Remember, I'm about to lead a team of agile and competent developers away from a hundred-ish RPM packages and into one AppImage, or possibly five Docker containers. I've also got to lead testers and others, and have the arguments ready for objectioneers. So here goes:
The big disconnect here: you are assuming that the only way people should use Cura is by installing it deep into a system and making it available to all users. This is 'the true Unix way'; only heretics want to 'break' this perfect solution that has been proven by 20 years of use.
But here in reality: along comes a competent developer and app author; they write something agile, totally up to date, in an architecture-neutral toolset (Python/Qt, for instance) and can then build their product for Windows, Mac, and Unix. It's multimedia; they use the very latest libs they can find.
- Distribution to the vast majority of customers is therefore simple and quick; the Windows users get a standard installer. Bingo: 80% of your customers happy that afternoon.
- Then you create a Mac installer; another simple and well-documented process; and hurrah, 18% more of your customers happy the next day for very little effort.
Please work out the rest of the process if done via 'the true Linux way'; try to grok the scope and scale of providing a Unix package + dependencies just for Ubuntu, then scale that out to other distributions (unless you are one of those Debianaholics who pretends other distros don't exist). Think of the man-hours involved and the deep skillsets needed, all for a small fraction of customers, and ones who habitually don't pay for software. There are other Unix-like systems out there beyond mainstream Linux too; HP-UX and SUSE are very heavily used by the engineering industry, one of the target customer groups.
In reality we should be very grateful to Ultimaker for providing this tool on Linux at all, and grateful to the AppImage/Flatpak/Snap crowd for enabling this. The alternative is the Simplify3D route, where you get a self-installing tarball and a series of scripts to run as a superuser, or other scripts for a normal user. Oh, and pay 150 moneyunits for that.
Your employer, Canonical, is slow at providing updates to common libraries in a timely manner (and Red Hat, who I am tied to professionally, is worse). This has reached the point where it is holding Linux back; the application landscape on Windows is much richer and nicer than that on Linux, and the gap, which had narrowed considerably, is widening again as MS picks its game up.
For these reasons you will see more containers – AppImages, Snaps and Flatpaks; you will see apps switching away from APT/RPM/AUR/Portage and into containers. This will accelerate; you can either learn to ride the wave, or go under.
Oh, and:
I refer you, please, yet again, to the concept of an Operating System vs User Applications. I think user applications should absolutely live in user home directories and be confined to them by the OS. If written properly that is all a user application ever needs; it has advantages aplenty for security and performance, and no downside apart from making users do their own install (which with an AppImage can be as simple as double-clicking an icon in a web browser).
Oh, and I've deleted Cura from my system.
Your loss; I'll keep using it thanks.
Edit: Bad language.
-
Man, what a lot of words have flowed here recently. All good stuff, I'm sure.
-
Apologies @burtoogle for hijacking the thread; I'll shut up on that now.
To be more on-topic: as a newbie I found Cura's slicing excellent; in particular, the way it handles vase mode made me switch to it from Slic3r. A bit more control over supports would be nice. And that is all I can find to comment on regarding how it slices models; the missing-polygon issue I saw in a model with 3.0.x appears resolved in 3.1 and 3.2.